What Does the EU AI Act Require From Companies?

PolicyGuard Team
5 min read

The EU AI Act requires companies to classify all AI systems into risk categories and comply with the requirements specific to each category: prohibited systems must be removed, high-risk systems require documentation and conformity assessments, and limited-risk systems must meet transparency obligations.

The regulation applies to any company that develops, deploys, or makes available AI systems within the EU, regardless of where the company is headquartered. This extraterritorial scope means US, UK, and other non-EU companies serving EU markets must comply.

TL;DR: EU AI Act requirements depend on risk category, from nothing (minimal) to a complete ban (prohibited).

EU AI Act: The EU's comprehensive regulation governing AI systems, organized by risk level.

The EU AI Act is the world's first comprehensive AI regulation. It entered into force in August 2024, with obligations phasing in from February 2025 through August 2027. Organizations need to understand exactly what is required and by when.

This post covers the four risk categories, specific requirements for each, who the law applies to, and the penalties for non-compliance.

Four Risk Categories

Every AI system falls into one of four risk categories. The category determines what you must do.

| Category | Examples | Requirements | Deadline |
| --- | --- | --- | --- |
| Prohibited | Social scoring, real-time biometric surveillance, manipulative AI | Must be removed entirely | February 2025 |
| High-risk | HR/recruitment AI, credit scoring, medical devices, critical infrastructure | Full conformity assessment, documentation, monitoring, human oversight | August 2026 |
| Limited risk | Chatbots, deepfake generators, emotion recognition | Transparency and disclosure obligations | August 2025 |
| Minimal risk | Spam filters, AI-powered games, inventory management | No mandatory requirements (voluntary codes encouraged) | None |

Most enterprise AI usage falls into the limited or high-risk categories. The classification depends on the use case, not the technology. The same AI model can be minimal risk in one application and high-risk in another.
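Because classification follows the use case, a first-pass triage can be expressed as a simple lookup from use case to risk tier. The sketch below is illustrative only, not legal advice: the tier labels and use-case keys are our own shorthand, and a real assessment must check the system against Annex III and the Article 5 prohibitions.

```python
# Illustrative shorthand mapping from use case to EU AI Act risk tier.
# These keys are our own labels, not terms defined in the Act.
RISK_BY_USE_CASE = {
    "social_scoring": "prohibited",
    "recruitment_screening": "high",
    "credit_scoring": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def classify(use_case: str) -> str:
    # Default to "unclassified" so unknown systems get flagged for human
    # review instead of silently being treated as minimal risk.
    return RISK_BY_USE_CASE.get(use_case, "unclassified")
```

Note that the same underlying model can appear under several keys: a general-purpose chatbot used for customer support is limited risk, but the same model wired into recruitment screening would map to high risk.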

Prohibited Practices

Eight categories of AI are completely banned. These prohibitions took effect in February 2025, ahead of the Act's other obligations.

  • Social scoring: AI systems that score individuals based on social behavior or personality traits for detrimental treatment
  • Real-time biometric identification: Live facial recognition in public spaces for law enforcement (with narrow exceptions)
  • Emotion recognition in workplaces and schools: AI that infers emotions of employees or students
  • Manipulative AI: Systems that deploy subliminal or manipulative techniques to distort behavior
  • Exploitation of vulnerabilities: AI targeting people based on age, disability, or social situation
  • Predictive policing: AI predicting criminal behavior based solely on profiling or personality traits
  • Untargeted facial image scraping: Building facial recognition databases from internet or CCTV scraping
  • Biometric categorization for sensitive attributes: Inferring race, political opinions, or sexual orientation from biometrics

Organizations should audit all AI systems against this list immediately. For a step-by-step compliance approach, see our EU AI Act compliance guide.
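A first-pass audit can be mechanical: compare each system's declared capabilities against the banned list and flag any overlap. This is a minimal Python sketch under our own assumptions; the capability tags, system names, and inventory shape are illustrative, and a flagged match still needs legal review against the Act's actual definitions.

```python
# Our own shorthand tags for the eight banned practices (not legal terms).
PROHIBITED_PRACTICES = {
    "social_scoring",
    "realtime_biometric_id",
    "workplace_emotion_recognition",
    "manipulative_techniques",
    "vulnerability_exploitation",
    "predictive_policing_profiling",
    "untargeted_face_scraping",
    "biometric_sensitive_categorization",
}

def audit(inventory: dict[str, set[str]]) -> list[str]:
    """Return the names of systems whose declared capabilities
    intersect the banned-practice list."""
    return [name for name, caps in inventory.items()
            if caps & PROHIBITED_PRACTICES]

flagged = audit({
    "hr-screener": {"cv_ranking"},
    "mood-cam": {"workplace_emotion_recognition"},
})
# flagged == ["mood-cam"]
```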

High-Risk Requirements

High-risk AI systems face the most demanding requirements. Compliance requires documented processes across seven areas.

  1. Risk management system: Continuous identification, analysis, and mitigation of risks throughout the AI system lifecycle
  2. Data governance: Training and testing data must meet quality criteria, be relevant, representative, and free of errors
  3. Technical documentation: Detailed documentation of system design, development, and intended purpose before market placement
  4. Record-keeping: Automatic logging of events during operation, with logs retained for the system lifetime
  5. Transparency: Clear instructions for deployers including system capabilities, limitations, and human oversight measures
  6. Human oversight: Systems must be designed to allow effective human oversight during use
  7. Accuracy, robustness, cybersecurity: Systems must meet appropriate levels of accuracy and be resilient to errors and attacks

Get AI Governance Sorted in 48 Hours

PolicyGuard enforces AI policies automatically, detects shadow AI, and generates audit documentation.

Start free trial


Who It Applies To

The EU AI Act has extraterritorial reach. Location alone does not determine applicability.

  • Providers: Companies that develop or place AI systems on the EU market, regardless of where they are based
  • Deployers: Organizations that use AI systems within the EU, even if the system was built elsewhere
  • Importers and distributors: Companies that bring non-EU AI systems into the EU market
  • Non-EU companies: Any company whose AI system output is used within the EU

If your AI system affects people in the EU or your AI output is used in the EU, the Act likely applies to you. For a broader regulatory view, see our 2026 AI regulatory compliance guide.

Penalties

The EU AI Act penalties are among the highest in technology regulation, designed to ensure compliance even from the largest companies.

| Violation | Maximum Fine | Authority |
| --- | --- | --- |
| Prohibited AI practices | 35 million euros or 7% of global annual revenue | National market surveillance authorities |
| High-risk non-compliance | 15 million euros or 3% of global annual revenue | National market surveillance authorities |
| Providing incorrect information | 7.5 million euros or 1% of global annual revenue | National market surveillance authorities |

For SMEs and startups, fines are capped at the lower of the percentage or fixed amount. The regulation also allows national authorities to impose additional corrective measures including withdrawal of AI systems from the market.

FAQ

Does the EU AI Act apply to US companies?

Yes, if the AI system is placed on the EU market or its output is used within the EU. The extraterritorial scope mirrors GDPR. US companies serving EU customers or users must comply.

When do we need to be compliant?

Prohibited practices: February 2025 (already in effect). AI literacy requirements: February 2025. Limited risk transparency: August 2025. High-risk systems: August 2026. General-purpose AI: August 2025-2027 depending on classification.

What counts as a high-risk AI system?

AI used in hiring, credit decisions, education access, critical infrastructure, law enforcement, immigration, and judicial processes. Annex III of the Act lists all high-risk categories. The classification depends on the use case, not the underlying technology.

Do we need to classify every AI tool we use?

Yes. The Act requires organizations to understand which AI systems they use and their risk classifications. This is why an AI tool inventory is a prerequisite for compliance.

Can we still use ChatGPT and similar tools?

Yes, for most use cases. General-purpose AI chatbots typically fall under limited risk (transparency obligations) for most business applications. However, using them for high-risk decisions (hiring, credit) triggers additional requirements.

Tags: EU AI Act, AI Regulations, AI Compliance

Frequently Asked Questions

Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act has extraterritorial reach, similar to GDPR. It applies to any organization that places AI systems on the EU market or whose AI systems produce outputs that are used within the EU, regardless of where the company is headquartered. A US-based software company whose AI product is used by EU customers must comply. A Japanese manufacturer deploying AI-driven quality control in a European factory is also covered. The only exemptions are AI systems used exclusively for military or national security purposes and purely personal non-professional use. Companies operating globally should assume the Act applies if any portion of their AI footprint touches the EU.
What AI uses are completely prohibited under the EU AI Act?
The EU AI Act identifies several categories of AI use that are outright banned. These include social scoring systems that evaluate citizens based on behavior or personality traits, real-time remote biometric identification in public spaces for law enforcement except in narrowly defined emergencies, AI systems that exploit vulnerabilities of specific groups such as children or people with disabilities, manipulative AI techniques that deploy subliminal methods to distort behavior in harmful ways, and untargeted scraping of facial images from the internet or CCTV footage to build recognition databases. These prohibitions carry the heaviest penalties under the Act and took effect ahead of other requirements.
What are the maximum penalties for EU AI Act non-compliance?
The EU AI Act establishes a tiered penalty structure. The most severe fines apply to deploying prohibited AI practices: up to 35 million euros or seven percent of total worldwide annual revenue, whichever is higher. For violations related to high-risk AI system requirements, fines can reach 15 million euros or three percent of global annual revenue. Supplying incorrect or misleading information to authorities can result in fines up to 7.5 million euros or one percent of global annual revenue. For SMEs and startups, the lower of the two thresholds in each tier applies. These penalties are designed to be proportionate but dissuasive, following the model established by GDPR enforcement.
When do EU AI Act requirements fully come into effect?
The EU AI Act follows a phased implementation timeline. The prohibitions on banned AI practices took effect in February 2025, six months after the Act entered into force. Obligations for general-purpose AI models and governance structures became applicable in August 2025. The bulk of the requirements, including rules for high-risk AI systems listed in Annex III, apply from August 2026. Requirements for high-risk AI systems that are safety components of products already regulated under existing EU product legislation apply from August 2027. Organizations should not wait for their specific deadline, as building compliant processes, documentation, and risk management frameworks takes considerable lead time.
How do you determine which risk category your AI systems fall into?
The EU AI Act classifies AI systems into four risk tiers. Unacceptable risk covers prohibited practices like social scoring. High risk includes AI used in critical infrastructure, education, employment, essential services, law enforcement, and migration management as enumerated in Annex III. Limited risk covers systems like chatbots and deepfake generators that must meet transparency obligations such as disclosing that content is AI-generated. Minimal risk encompasses everything else and faces no specific obligations. To classify your systems, map each AI application against the Annex III categories and conduct a contextual assessment of how the system is used, what decisions it influences, and what populations it affects.

PolicyGuard Team builds PolicyGuard AI, the compliance layer for enterprise AI governance.
