The EU AI Act requires companies to classify every AI system into a risk category and meet that category's requirements: prohibited systems must be removed, high-risk systems need documentation and conformity assessments, and limited-risk systems need transparency disclosures.
The regulation applies to any company that develops, deploys, or makes available AI systems within the EU, regardless of where the company is headquartered. This extraterritorial scope means US, UK, and other non-EU companies serving EU markets must comply.
TL;DR: EU AI Act requirements depend on risk category, from nothing (minimal) to a complete ban (prohibited).
EU AI Act: The EU's comprehensive regulation governing AI systems, organized by risk level.
The EU AI Act is the world's first comprehensive AI regulation. It entered into force in August 2024, with obligations phasing in from February 2025 through August 2027. Organizations need to understand exactly what is required and by when.
This post covers the four risk categories, specific requirements for each, who the law applies to, and the penalties for non-compliance.
Four Risk Categories
Every AI system falls into one of four risk categories. The category determines what you must do.
| Category | Examples | Requirements | Deadline |
|---|---|---|---|
| Prohibited | Social scoring, real-time biometric surveillance, manipulative AI | Must be removed entirely | February 2025 |
| High-risk | HR/recruitment AI, credit scoring, medical devices, critical infrastructure | Full conformity assessment, documentation, monitoring, human oversight | August 2026 |
| Limited risk | Chatbots, deepfake generators, emotion recognition | Transparency and disclosure obligations | August 2026 |
| Minimal risk | Spam filters, AI-powered games, inventory management | No mandatory requirements (voluntary codes encouraged) | None |
Most enterprise AI usage falls into the limited or high-risk categories. The classification depends on the use case, not the technology. The same AI model can be minimal risk in one application and high-risk in another.
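Because classification follows the use case rather than the model, a first-pass triage can be sketched as a lookup from use case to risk tier. The mapping below is a hypothetical, simplified illustration; actual classification requires legal review against the Act's annexes.

```python
# Hypothetical mapping of use cases to EU AI Act risk tiers.
# Illustrative only -- real classification requires legal analysis
# of the Act's prohibited-practice list and Annex III.
RISK_BY_USE_CASE = {
    "social_scoring": "prohibited",
    "cv_screening": "high",          # HR/recruitment use case
    "credit_scoring": "high",
    "customer_chatbot": "limited",   # transparency obligations
    "spam_filtering": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; unknown cases need review."""
    return RISK_BY_USE_CASE.get(use_case, "unclassified: needs review")

# The same underlying model lands in different tiers depending on use:
print(classify("customer_chatbot"))  # limited
print(classify("cv_screening"))      # high
```

Note that an unmapped use case deliberately returns "needs review" rather than defaulting to minimal risk: in practice, unclassified systems are the compliance gap.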
Prohibited Practices
Eight categories of AI are completely banned. These prohibitions have been in force since February 2025, with no transition period.
- Social scoring: AI systems that score individuals based on social behavior or personality traits for detrimental treatment
- Real-time biometric identification: Live facial recognition in public spaces for law enforcement (with narrow exceptions)
- Emotion recognition in workplaces and schools: AI that infers emotions of employees or students
- Manipulative AI: Systems that deploy subliminal or manipulative techniques to distort behavior
- Exploitation of vulnerabilities: AI targeting people based on age, disability, or social situation
- Predictive policing: AI predicting criminal behavior based solely on profiling or personality traits
- Untargeted facial image scraping: Building facial recognition databases from internet or CCTV scraping
- Biometric categorization for sensitive attributes: Inferring race, political opinions, or sexual orientation from biometrics
Organizations should audit all AI systems against this list immediately. For a step-by-step compliance approach, see our EU AI Act compliance guide.
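An audit of this kind amounts to checking each deployed system's declared capabilities against the banned list. A minimal sketch, using hypothetical capability tags (the names below are illustrative, not the Act's legal wording):

```python
# The eight prohibited practices, as hypothetical capability tags.
PROHIBITED = {
    "social_scoring",
    "realtime_biometric_id",
    "workplace_emotion_recognition",
    "manipulative_techniques",
    "vulnerability_exploitation",
    "predictive_policing_profiling",
    "untargeted_face_scraping",
    "sensitive_biometric_categorization",
}

def audit(systems: dict[str, set[str]]) -> list[str]:
    """Return names of systems whose capabilities intersect the banned list."""
    return [name for name, caps in systems.items() if caps & PROHIBITED]

inventory = {
    "hr_screener": {"cv_ranking"},
    "mood_monitor": {"workplace_emotion_recognition"},
}
print(audit(inventory))  # ['mood_monitor']
```

A flagged system is a candidate for immediate removal; borderline matches still need legal review, since the Act's prohibitions carry narrow exceptions.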
High-Risk Requirements
High-risk AI systems face the most demanding requirements. Compliance requires documented processes across seven areas.
- Risk management system: Continuous identification, analysis, and mitigation of risks throughout the AI system lifecycle
- Data governance: Training and testing data must meet quality criteria and be relevant, representative, and, to the best extent possible, free of errors
- Technical documentation: Detailed documentation of system design, development, and intended purpose before market placement
- Record-keeping: Automatic logging of events during operation, with logs retained for the system lifetime
- Transparency: Clear instructions for deployers including system capabilities, limitations, and human oversight measures
- Human oversight: Systems must be designed to allow effective human oversight during use
- Accuracy, robustness, cybersecurity: Systems must meet appropriate levels of accuracy and be resilient to errors and attacks
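The seven areas above lend themselves to a per-system gap tracker. A minimal sketch, assuming one boolean per requirement area (field names are illustrative shorthand, not the Act's terminology):

```python
from dataclasses import dataclass, fields

# Hypothetical compliance tracker: one flag per high-risk requirement area.
@dataclass
class HighRiskChecklist:
    risk_management: bool = False
    data_governance: bool = False
    technical_documentation: bool = False
    record_keeping: bool = False
    transparency: bool = False
    human_oversight: bool = False
    accuracy_robustness_security: bool = False

    def gaps(self) -> list[str]:
        """List requirement areas not yet satisfied for this system."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

system = HighRiskChecklist(risk_management=True, human_oversight=True)
print(system.gaps())  # the five remaining areas to close before August 2026
```

In practice each flag would be backed by evidence (documents, logs, assessment reports) rather than a bare boolean, but the gap-listing pattern is the same.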
Get AI Governance Sorted in 48 Hours
PolicyGuard enforces AI policies automatically, detects shadow AI, and generates audit documentation. PolicyGuard helps companies like yours get AI governance documentation audit-ready in 48 hours or less.
Start free trial →
Who It Applies To
The EU AI Act has extraterritorial reach. Location alone does not determine applicability.
- Providers: Companies that develop or place AI systems on the EU market, regardless of where they are based
- Deployers: Organizations that use AI systems within the EU, even if the system was built elsewhere
- Importers and distributors: Companies that bring non-EU AI systems into the EU market
- Non-EU companies: Any company whose AI system output is used within the EU
If your AI system affects people in the EU or your AI output is used in the EU, the Act likely applies to you. For a broader regulatory view, see our 2026 AI regulatory compliance guide.
Penalties
The EU AI Act penalties are among the highest in technology regulation, designed to ensure compliance even from the largest companies.
| Violation | Maximum Fine | Authority |
|---|---|---|
| Prohibited AI practices | 35 million euros or 7% of global annual revenue | National market surveillance authorities |
| High-risk non-compliance | 15 million euros or 3% of global annual revenue | National market surveillance authorities |
| Providing incorrect information | 7.5 million euros or 1% of global annual revenue | National market surveillance authorities |
For SMEs and startups, fines are capped at the lower of the percentage or fixed amount. The regulation also allows national authorities to impose additional corrective measures including withdrawal of AI systems from the market.
FAQ
Does the EU AI Act apply to US companies?
Yes, if the AI system is placed on the EU market or its output is used within the EU. The extraterritorial scope mirrors GDPR. US companies serving EU customers or users must comply.
When do we need to be compliant?
Prohibited practices: February 2025 (already in effect). AI literacy requirements: February 2025. General-purpose AI: August 2025 (models already on the market have until August 2027). Limited risk transparency: August 2026. High-risk systems: August 2026, or August 2027 for AI embedded in products covered by existing EU product legislation.
What counts as a high-risk AI system?
AI used in hiring, credit decisions, education access, critical infrastructure, law enforcement, immigration, and judicial processes. Annex III of the Act lists all high-risk categories. The classification depends on the use case, not the underlying technology.
Do we need to classify every AI tool we use?
Yes. The Act requires organizations to understand which AI systems they use and their risk classifications. This is why an AI tool inventory is a prerequisite for compliance.
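The inventory itself can start as a simple structured record per tool. A minimal sketch, with hypothetical field names and example entries:

```python
from dataclasses import dataclass

# Hypothetical inventory record: one entry per AI tool in use.
@dataclass
class AIToolRecord:
    name: str
    vendor: str
    use_case: str
    risk_tier: str   # prohibited / high / limited / minimal / unclassified
    owner: str       # accountable team or person

inventory = [
    AIToolRecord("ChatGPT", "OpenAI", "drafting marketing copy",
                 "limited", "marketing"),
    AIToolRecord("ResumeRank", "ExampleVendor", "CV screening",
                 "high", "HR"),
]

# Pull out the systems that trigger the heaviest obligations:
high_risk = [t.name for t in inventory if t.risk_tier == "high"]
print(high_risk)  # ['ResumeRank']
```

Even a spreadsheet with these five columns is a workable starting point; the key is that every tool has an owner and a risk tier, including "unclassified" ones awaiting review.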
Can we still use ChatGPT and similar tools?
Yes, for most use cases. General-purpose AI chatbots typically fall under limited risk (transparency obligations) for most business applications. However, using them for high-risk decisions (hiring, credit) triggers additional requirements.