EU AI Act phases: prohibited practices banned Feb 2025, GPAI obligations Aug 2025, high-risk full compliance Aug 2026, remaining provisions Aug 2027.
The EU AI Act entered into force on August 1, 2024, but compliance obligations phase in over a three-year period. Each phase brings new requirements for different categories of AI systems and different roles in the AI value chain. Missing a phase deadline means operating in violation from that date forward, with penalties scaling up to 35 million euros or 7% of global annual turnover for the most serious infractions.
Who This Applies To: Providers, deployers, importers, distributors of AI systems on the EU market or affecting EU residents, regardless of organization location.
The EU AI Act is the world's first comprehensive AI law, and its enforcement timeline is more complex than most organizations realize. Unlike regulations with a single compliance deadline, the AI Act phases in requirements over three years, with different obligations activating at different times for different types of AI systems and different actors in the supply chain.
This guide provides the definitive enforcement timeline, explains exactly what becomes mandatory at each phase, identifies which organizations must act at each stage, and details the penalty structure that applies from day one.
What It Requires
The EU AI Act creates a comprehensive regulatory framework organized around risk levels. Understanding the phased enforcement requires knowing which requirements apply to which risk categories and when each phase begins.
Prohibited AI practices (highest risk). Certain AI applications are banned entirely. These include social scoring systems that lead to detrimental or unfavorable treatment (whether operated by public or private actors), real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions), AI that exploits vulnerabilities of specific groups based on age, disability, or social or economic situation, AI systems that infer emotions in workplaces and educational institutions (with limited exceptions for medical or safety reasons), untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases, and manipulative or deceptive AI techniques that materially distort behavior and cause significant harm.
General-purpose AI (GPAI) models. Providers of GPAI models, including large language models, must comply with transparency requirements regardless of how their models are used downstream. This includes maintaining technical documentation, providing information to downstream providers integrating the model, implementing policies to comply with EU copyright law, and publishing sufficiently detailed summaries of training data. GPAI models with systemic risk (those trained with total computing power exceeding 10^25 FLOPs) face additional obligations including model evaluation, adversarial testing, tracking and reporting serious incidents, and ensuring adequate cybersecurity protections.
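The systemic-risk presumption is a simple compute threshold, which a minimal sketch can make concrete (the function name and example figure are illustrative, not from the Act):

```python
# The AI Act presumes systemic risk for GPAI models whose cumulative
# training compute exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a GPAI model is presumed to carry systemic risk under the Act."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))   # True: extra obligations apply
print(presumed_systemic_risk(1e24))   # False: baseline GPAI obligations only
```

A model crossing this threshold picks up the additional duties listed above (model evaluation, adversarial testing, incident reporting, cybersecurity) on top of the baseline GPAI transparency obligations.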
High-risk AI systems. AI systems classified as high-risk must meet extensive requirements including risk management systems, data governance for training and testing data, technical documentation, record-keeping and logging, transparency to users, human oversight measures, accuracy, robustness, and cybersecurity standards. High-risk categories include AI used in biometric identification, critical infrastructure management, education and vocational training, employment and worker management, access to essential services including credit scoring, law enforcement, migration and border control, and administration of justice.
Limited-risk AI systems. AI systems with specific transparency risks must meet disclosure requirements. This includes AI systems that interact with people (chatbots must disclose they are AI), systems that generate synthetic content (deepfakes must be labeled), and emotion recognition or biometric categorization systems (users must be informed).
Minimal-risk AI systems. AI systems that do not fall into the above categories can be deployed freely under the AI Act, though codes of conduct are encouraged and other laws like GDPR still apply.
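The four-tier scheme above lends itself to a first-pass triage of an AI inventory. The sketch below is a deliberate simplification for illustration only: the category sets and use-case names are hypothetical examples, and real classification requires legal analysis of the Act's annexes.

```python
# Hypothetical triage of an AI inventory into EU AI Act risk tiers.
# The membership sets are illustrative examples, not the Act's full lists.
PROHIBITED = {"social_scoring", "untargeted_face_scraping", "workplace_emotion_inference"}
HIGH_RISK_ANNEX_III = {"credit_scoring", "recruitment_screening", "exam_proctoring", "border_control"}
LIMITED_RISK = {"chatbot", "deepfake_generator", "biometric_categorization"}

def classify(use_case: str) -> str:
    """Return a first-pass EU AI Act risk tier for an AI use case."""
    if use_case in PROHIBITED:
        return "prohibited"        # banned since February 2, 2025
    if use_case in HIGH_RISK_ANNEX_III:
        return "high-risk"         # full compliance by August 2, 2026
    if use_case in LIMITED_RISK:
        return "limited-risk"      # transparency/disclosure duties
    return "minimal-risk"          # no AI Act obligations; codes of conduct encouraged

print(classify("credit_scoring"))  # high-risk
```

Even a rough triage like this is useful for the inventory step in the checklist further down: it separates the systems needing conformity assessments from those needing only disclosure notices.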
Key Dates
| Date | Phase | What Takes Effect | Who Must Act |
|---|---|---|---|
| August 1, 2024 | Entry into force | AI Act officially becomes law; definitions and general provisions active | All actors should begin preparations |
| February 2, 2025 | Phase 1: Prohibitions | All prohibited AI practices become unlawful; AI literacy obligations apply | All organizations must cease prohibited practices; providers and deployers must ensure AI literacy |
| August 2, 2025 | Phase 2: GPAI | General-purpose AI model obligations take effect including transparency, documentation, and copyright compliance; systemic risk GPAI face additional obligations; governance structures including AI Office operational | GPAI model providers (OpenAI, Anthropic, Google, Meta, Mistral, etc.); organizations using GPAI models must understand downstream obligations |
| August 2, 2026 | Phase 3: High-risk (main) | Full compliance required for high-risk AI systems listed in Annex III; conformity assessments, risk management, data governance, technical documentation, human oversight all mandatory; penalties fully enforceable | Providers, deployers, importers, distributors of high-risk AI systems in all Annex III categories |
| August 2, 2027 | Phase 4: Remaining high-risk | Obligations for high-risk AI systems that are safety components of products already regulated by EU harmonized legislation (Annex I); product-specific conformity assessments | Manufacturers of products with AI safety components (medical devices, machinery, vehicles, aviation, etc.) |
| August 2, 2030 | Phase 5: Existing systems | High-risk AI systems already in use by public authorities must achieve full compliance | EU public authorities using legacy high-risk AI systems |
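The table above can be turned into a simple deadline lookup. This is a sketch: the tier keys are our own labels, and "days remaining" is only a planning aid, not a legal determination of when obligations attach.

```python
from datetime import date

# Enforcement dates for each phase, taken from the table above.
ENFORCEMENT_DATES = {
    "prohibited": date(2025, 2, 2),
    "gpai": date(2025, 8, 2),
    "high-risk-annex-iii": date(2026, 8, 2),
    "high-risk-annex-i": date(2027, 8, 2),
    "legacy-public-authority": date(2030, 8, 2),
}

def days_until_deadline(tier: str, today: date) -> int:
    """Days before a tier's obligations apply; negative means already in force."""
    return (ENFORCEMENT_DATES[tier] - today).days

print(days_until_deadline("high-risk-annex-iii", date(2026, 8, 1)))  # 1
print(days_until_deadline("prohibited", date(2025, 8, 1)))           # -180
```

A negative result means that tier's obligations are already live, and any in-scope system should be treated as a remediation priority rather than a roadmap item.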
Penalties
The EU AI Act establishes a three-tier penalty structure, with the highest penalties reserved for the most serious violations. Penalties apply from the date each phase takes effect and are enforced by national competent authorities in each EU member state, with the AI Office enforcing GPAI obligations directly.
Tier 1: Prohibited AI practices. Violations of the AI Act's prohibitions on banned AI practices carry penalties of up to 35 million euros or 7% of total worldwide annual turnover, whichever is higher. This is the highest penalty tier in the AI Act. Prohibited practices became unlawful on February 2, 2025, with the penalty provisions themselves becoming applicable on August 2, 2025. For SMEs and startups, the penalty is the lower of the two amounts rather than the higher.
Tier 2: High-risk AI and GPAI obligations. Violations of obligations for high-risk AI systems, GPAI model requirements, and other substantive provisions carry penalties of up to 15 million euros or 3% of total worldwide annual turnover, whichever is higher. This covers failures in risk management, data governance, transparency, human oversight, conformity assessments, and post-market monitoring. This tier applies progressively as each phase takes effect.
Tier 3: Incorrect information. Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities carries penalties of up to 7.5 million euros or 1% of total worldwide annual turnover, whichever is higher. This applies to documentation, declarations of conformity, and responses to regulatory requests.
Additional enforcement mechanisms: National authorities can also order the withdrawal or recall of non-compliant AI systems from the market, require corrective actions within specified timeframes, and restrict or prohibit the making available of AI systems that pose risks. The reputational impact of a public enforcement action or market withdrawal can exceed the financial penalty.
EU AI Act Enforcement Compliance Checklist
- ☐ Verify that no prohibited AI practices are in use anywhere in the organization, including social scoring, manipulative AI, and unauthorized biometric systems
- ☐ Implement AI literacy training programs for all staff involved in AI development, deployment, or oversight as required since February 2025
- ☐ Inventory all AI systems and classify each by risk level (prohibited, high-risk Annex III, high-risk Annex I, limited-risk, minimal-risk)
- ☐ For organizations using GPAI models: verify that your GPAI providers comply with transparency and documentation obligations effective since August 2025
- ☐ For high-risk AI systems: implement complete risk management systems with ongoing monitoring and updating procedures ahead of August 2026
- ☐ Prepare technical documentation and establish data governance practices for all high-risk AI systems including training data provenance and quality measures
- ☐ Implement human oversight mechanisms for high-risk AI systems ensuring that qualified individuals can effectively oversee AI outputs and intervene when necessary
- ☐ Establish conformity assessment procedures and identify whether self-assessment or third-party assessment applies to each high-risk AI system
- ☐ Register high-risk AI systems in the EU database and prepare declarations of conformity for each system before the August 2026 deadline
Do Not Miss an EU AI Act Deadline
PolicyGuard tracks every enforcement phase, maps your AI systems to applicable deadlines, and generates compliance documentation for each phase. Start now so August 2026 is not a crisis.
PolicyGuard helps companies like yours get AI governance documentation audit-ready in 48 hours or less.

Start free trial →

How PolicyGuard Helps
The phased enforcement timeline means organizations cannot take a single compliance snapshot and be done. Each phase activates new requirements, and the compliance posture must evolve accordingly. PolicyGuard manages this complexity through continuous monitoring against the enforcement timeline.
PolicyGuard's AI inventory classifies every system by EU AI Act risk level, then maps each system to the specific enforcement phase when its obligations activate. The platform generates phase-specific compliance dashboards showing exactly which requirements apply now, which are coming next, and your current compliance status for each. For high-risk systems approaching the August 2026 deadline, PolicyGuard provides conformity assessment templates, risk management system documentation, and technical documentation frameworks that meet the AI Act's detailed requirements.

The platform also monitors GPAI providers for compliance with their obligations, giving you visibility into whether your upstream AI vendors are meeting their transparency and documentation requirements. See our EU AI Act compliance guide for detailed requirements by risk category, our guide on what the EU AI Act requires for a complete breakdown, and our guide to mapping AI tools to EU AI Act categories for classification methodology.
FAQ
Are we already in violation if we have not started compliance?
Potentially, yes. Phase 1 took effect on February 2, 2025, making prohibited AI practices unlawful and requiring AI literacy programs. If your organization uses any prohibited AI applications or has not implemented AI literacy training for relevant staff, you are already in violation. Phase 2 took effect on August 2, 2025, activating GPAI obligations. While these primarily affect GPAI model providers, deployers using GPAI models should understand their downstream obligations. The most impactful phase for most organizations, Phase 3 for high-risk AI systems, takes effect on August 2, 2026, giving you a limited window to achieve full compliance.
Does the EU AI Act apply to companies outside the EU?
Yes. The AI Act applies to providers placing AI systems on the EU market regardless of where they are established, deployers of AI systems located within the EU, and providers and deployers located outside the EU where the output of the AI system is used in the EU. This extraterritorial scope means a US company whose AI system produces outputs used by EU-based customers is subject to the AI Act. The territorial reach is similar to GDPR and catches most organizations with any EU-facing operations.
What is the difference between Annex I and Annex III high-risk AI?
Annex III lists standalone high-risk AI use cases by domain, such as employment, education, credit scoring, and law enforcement. These face full compliance requirements starting August 2, 2026. Annex I covers AI systems that are safety components of products already regulated by existing EU harmonized legislation, such as medical devices under the Medical Devices Regulation, machinery under the Machinery Regulation, and vehicles under type-approval regulations. Annex I systems have an extended deadline of August 2, 2027, because their compliance must be coordinated with existing product safety frameworks.
How do GPAI obligations affect organizations that just use AI tools?
GPAI obligations under Phase 2 primarily affect GPAI model providers (companies like OpenAI, Google, Anthropic, Meta). However, organizations deploying AI tools built on GPAI models should verify that their providers comply with GPAI obligations, because non-compliant GPAI models may create downstream compliance risks. Additionally, if you fine-tune or substantially modify a GPAI model, you may assume provider obligations for the modified version. PolicyGuard tracks GPAI provider compliance status so deployers have visibility into their supply chain.
Can we get an extension on compliance deadlines?
No. The enforcement dates are fixed in the regulation and apply uniformly. There is no mechanism for individual organizations to request extensions. The only exception is the extended timeline for AI systems already in use by public authorities, which have until August 2, 2030. For all other organizations, the deadlines are firm. The three-year phased approach was designed to provide adequate preparation time, and regulators have signaled that they expect organizations to have used this time effectively.
Track Every EU AI Act Deadline Automatically
PolicyGuard maps your AI systems to enforcement phases, monitors compliance status, and alerts you when action is needed. Do not let a deadline catch you off guard.
Start free trial