Implementing the EU AI Act takes seven steps: build an AI system inventory, run the prohibited use assessment, classify each system by risk tier, implement high-risk compliance requirements, add limited-risk transparency disclosures, establish ongoing monitoring, and prepare for regulatory examination.
The EU AI Act uses a risk-based approach that categorizes AI systems into four tiers: unacceptable risk (prohibited), high risk (heavy requirements), limited risk (transparency obligations), and minimal risk (no specific requirements). Implementation focuses on identifying which tier each of your AI systems falls into and then meeting the corresponding requirements.
The EU AI Act is the most comprehensive AI regulation in the world, and its requirements apply to any organization that deploys AI systems affecting individuals in the European Union, regardless of where the organization is headquartered. Implementation is not optional for organizations operating in or serving EU markets, and the penalties for non-compliance are substantial: up to 35 million euros or seven percent of global annual turnover, whichever is higher, for the most serious violations. Even organizations outside the EU are affected if their AI systems are used by or impact EU residents.
This guide is for compliance officers, legal teams, CTOs, and data protection officers who need to implement the EU AI Act within their organization. By the end, you will have a complete implementation program covering every AI system in your inventory, with risk classifications, required controls, and a monitoring framework that keeps you compliant as the regulatory landscape evolves. You need access to your organization's AI system inventory or the ability to build one, and at least one stakeholder from legal who understands EU regulatory requirements.
For a broader overview of the regulation, see our EU AI Act compliance guide. For a detailed analysis of the specific requirements, see our guide on what the EU AI Act requires.
Before You Start
Complete these preparations before beginning implementation:
- Legal counsel with EU regulatory expertise: The EU AI Act contains specific legal terms, exceptions, and cross-references to other EU regulations like GDPR that require legal interpretation. Ensure you have access to legal counsel who understands EU regulatory frameworks.
- AI system documentation: Gather technical documentation for every AI system you use or develop, including vendor documentation for third-party systems. You will need this for risk classification and compliance requirement mapping.
- Organizational scope: Determine which parts of your organization interact with EU markets. If you serve EU customers, process EU resident data, or deploy AI systems that affect EU individuals, the regulation applies.
- Time estimate: Simple implementations with fewer than 10 AI systems take 5-10 weeks. Complex implementations with 10 or more systems take 11-25 weeks. The largest variables are the number of high-risk systems and the availability of technical documentation.
Step-by-Step: How to Implement the EU AI Act
Step 1: Build a Complete AI System Inventory
The AI system inventory is the starting point for EU AI Act compliance because you cannot classify risk or implement controls for systems you do not know about. The EU AI Act defines AI systems broadly, including not just standalone AI applications but also AI components embedded in larger systems, AI-powered features within existing software, and AI services consumed through APIs. Many organizations underestimate their AI footprint because they count only the obvious tools like ChatGPT while overlooking AI embedded in their CRM, recruitment platform, fraud detection, or customer service tools.
Build the inventory by cataloging every system, tool, or service that meets the EU AI Act definition of an AI system. For each system, document:
- System name and vendor
- A technical description of what the AI component does
- Whether your organization is the provider or deployer under the Act (providers develop or place AI on the market; deployers use AI under their authority)
- The categories of individuals affected by the system, such as employees, customers, or EU residents
- The types of decisions or outputs the system produces
- The data inputs the system processes
- The geographic scope of the system's impact, specifically whether it affects EU individuals
- The current contractual relationship with the vendor, including any AI Act compliance commitments from the vendor
Cross-reference your general IT asset inventory, procurement records, vendor contracts, and department surveys to ensure completeness. Pay particular attention to AI embedded in platforms you already use, because SaaS vendors are rapidly adding AI features that may trigger EU AI Act obligations.
You need access to IT asset management records, vendor contracts, technical architecture documentation, and input from department heads about AI tools in use. PolicyGuard's AI system inventory includes EU AI Act classification fields built in. This step is done when you have a comprehensive inventory of every AI system with all required fields populated, you have confirmed geographic scope for each system, and you have identified whether your organization acts as provider or deployer for each. The most common mistake is overlooking embedded AI in existing platforms. When a CRM vendor adds AI-powered lead scoring or a recruitment platform adds AI resume screening, those features become AI systems subject to the EU AI Act even though you did not specifically procure an AI tool.
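The inventory fields above can be captured as structured records even before a dedicated tool is in place. A minimal sketch in Python; the field names, the `Role` enum, and the example system are illustrative choices, not terms prescribed by the Act:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"   # develops or places the AI system on the market
    DEPLOYER = "deployer"   # uses the AI system under its own authority

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (illustrative field names)."""
    name: str
    vendor: str
    description: str                 # what the AI component actually does
    role: Role                       # provider or deployer under the Act
    affected_groups: list[str]       # e.g. employees, customers, EU residents
    decision_types: list[str]        # decisions or outputs the system produces
    data_inputs: list[str]           # data the system processes
    affects_eu_individuals: bool     # geographic scope check
    vendor_commitments: str = ""     # AI Act commitments in the contract

inventory = [
    AISystemRecord(
        name="CRM lead scoring",
        vendor="ExampleCRM",         # hypothetical vendor
        description="Ranks inbound leads using a vendor-hosted model",
        role=Role.DEPLOYER,
        affected_groups=["customers", "EU residents"],
        decision_types=["lead priority score"],
        data_inputs=["contact details", "interaction history"],
        affects_eu_individuals=True,
    ),
]
```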
Step 2: Apply Prohibited Use Assessment
The prohibited use assessment determines whether any of your AI systems fall into the EU AI Act's unacceptable risk category, which means they must be discontinued immediately. This step comes before general risk classification because prohibited systems have no compliance pathway. You cannot implement controls to make a prohibited system compliant because the Act bans these uses entirely. Discovering a prohibited use during an audit or regulatory examination has severe consequences including the maximum penalty tier, so proactive assessment is essential.
Review each AI system against the EU AI Act's prohibited practices list. The Act prohibits AI systems that:
- Use subliminal, manipulative, or deceptive techniques to distort behavior in ways that cause significant harm
- Exploit vulnerabilities related to age, disability, or socioeconomic situation
- Perform social scoring by public authorities leading to detrimental treatment
- Conduct untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Perform emotion recognition in workplace or educational settings, except for safety or medical reasons
- Use biometric categorization to infer sensitive attributes such as race or political opinions
- Perform real-time remote biometric identification in public spaces for law enforcement, except in narrowly defined circumstances
- Perform individual risk assessments predicting criminal offenses based solely on profiling
For each AI system, document whether any prohibited practice applies, provide the rationale for the determination, and have legal counsel review the assessment. If any system is found to involve a prohibited practice, escalate immediately to legal and executive leadership for discontinuation planning.
You need the completed AI system inventory from Step 1, the full text of the EU AI Act prohibited practices provisions, and legal counsel to validate your assessment. This step is done when every AI system has been assessed against the prohibited practices list, the assessment is documented with rationale for each determination, legal counsel has reviewed and approved the assessment, and any prohibited systems have been flagged for immediate discontinuation. The most common mistake is conducting the assessment based on the vendor's description of the system rather than analyzing actual use within your organization. A tool that is not inherently prohibited may become prohibited based on how your organization deploys it, such as using an emotion recognition tool for employee performance monitoring.
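To keep the assessment auditable, record each determination per system together with its rationale and legal review status. A minimal sketch in Python; the category labels are shorthand for the prohibited practices listed above, not the Act's legal wording, and the example finding is hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class ProhibitedPractice(Enum):
    # Shorthand labels for the prohibited practices, not the Act's legal text
    MANIPULATIVE_TECHNIQUES = "subliminal or manipulative techniques"
    EXPLOITING_VULNERABILITIES = "exploiting age, disability, or socioeconomic vulnerability"
    SOCIAL_SCORING = "social scoring leading to detrimental treatment"
    UNTARGETED_FACIAL_SCRAPING = "untargeted scraping of facial images"
    WORKPLACE_EMOTION_RECOGNITION = "emotion recognition at work or in education"
    BIOMETRIC_CATEGORIZATION = "biometric categorization inferring sensitive attributes"
    REALTIME_REMOTE_BIOMETRIC_ID = "real-time remote biometric identification in public"
    PREDICTIVE_CRIMINAL_PROFILING = "crime prediction based solely on profiling"

@dataclass
class ProhibitedUseFinding:
    system_name: str
    practice: ProhibitedPractice
    applies: bool
    rationale: str              # based on actual use, not the vendor's description
    reviewed_by_legal: bool = False

findings = [
    ProhibitedUseFinding(
        system_name="Sentiment analytics tool",   # hypothetical system
        practice=ProhibitedPractice.WORKPLACE_EMOTION_RECOGNITION,
        applies=False,
        rationale="Used only on anonymized customer survey text, not on employees",
    ),
]

must_escalate = [f for f in findings if f.applies]   # flag for legal and executives
```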
Step 3: Classify by EU AI Act Risk Tier
Risk classification determines the regulatory requirements that apply to each AI system. The EU AI Act uses four tiers: unacceptable (prohibited, addressed in Step 2), high risk (extensive requirements), limited risk (transparency obligations), and minimal risk (no specific requirements beyond voluntary codes). Getting the classification wrong means either over-investing in compliance for low-risk systems or under-investing in compliance for high-risk systems, both of which create problems during regulatory examination.
For each non-prohibited AI system, apply the classification criteria from the EU AI Act. High-risk systems include AI used:
- As safety components of products covered by EU harmonization legislation
- In biometric identification or categorization
- In the management and operation of critical infrastructure
- In education and vocational training to determine access or outcomes
- In employment and worker management, including recruitment
- In access to essential services, including credit scoring and insurance
- In law enforcement, migration, and border control
- In the administration of justice and democratic processes
Limited-risk systems include AI that interacts directly with individuals, such as chatbots where the user may not realize they are interacting with AI; AI that generates synthetic content, including deepfakes; and emotion recognition or biometric categorization systems not classified as high risk. Minimal-risk systems are everything else. For each system classified as high risk, document the specific Annex III category it falls under and the rationale. For each limited-risk classification, document the transparency trigger.
You need the completed inventory and prohibited use assessment, the EU AI Act Annex III high-risk categories, and legal counsel for borderline classifications. This step is done when every AI system has a documented risk tier classification with supporting rationale, legal counsel has validated classifications for all high-risk and borderline systems, and you have a summary showing the total count of systems per tier. The most common mistake is classifying systems based on the AI technology used rather than the use case. A large language model is not inherently high risk, but the same model used for automated resume screening in recruitment is high risk because of the Annex III employment category. Classification depends on application context, not technical architecture.
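Because the tier depends on the deployment context rather than the underlying technology, it helps to classify against internal use-case labels. A simplified sketch in Python, using illustrative use-case sets that would still need validation against Annex III by legal counsel:

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified, non-exhaustive mapping from internal use-case labels to tiers.
# Real classification must be validated against Annex III by legal counsel.
HIGH_RISK_USE_CASES = {
    "recruitment screening", "worker management", "credit scoring",
    "insurance pricing", "education admission", "critical infrastructure",
    "biometric identification",
}
LIMITED_RISK_USE_CASES = {
    "customer chatbot", "synthetic content generation", "emotion recognition",
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for an internal use-case label."""
    if use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USE_CASES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The same model lands in different tiers depending on how it is deployed:
print(classify("customer chatbot"))        # RiskTier.LIMITED
print(classify("recruitment screening"))   # RiskTier.HIGH
```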
Step 4: Implement High-Risk Compliance Requirements
High-risk AI systems carry the most demanding requirements under the EU AI Act, and these requirements apply regardless of whether your organization is the provider or deployer, though the specific obligations differ. For deployers, which is what most organizations are when they use third-party AI tools, the requirements focus on proper use, monitoring, human oversight, and record-keeping. Failure to implement these requirements for even one high-risk system can result in enforcement action, so this step requires thorough, system-by-system implementation.
For each high-risk AI system where your organization is the deployer, implement these requirements:
- Human oversight: designate a qualified person responsible for overseeing the system's operation, document the oversight procedures including how the human monitor can intervene or override the system's outputs, and train the designated person on the system's capabilities and limitations.
- Input data quality: document the data inputs the system processes, implement checks to ensure input data quality meets the system's requirements, and establish procedures for detecting and correcting data quality issues.
- Operation logs: enable logging of the system's operations for the period required by the Act, store logs securely with appropriate access controls, and ensure logs can be made available to regulatory authorities upon request.
- Fundamental rights impact assessment, for high-risk systems used in areas affecting individuals' rights: document the system's impact on equality, privacy, and other fundamental rights, identify risk mitigation measures, and consult relevant stakeholders including data protection officers and employee representatives where applicable.
- Transparency obligations: inform individuals that they are subject to a high-risk AI system, provide information about the system's logic and impact to affected individuals where required, and maintain documentation of how transparency requirements are met.
You need the risk classification from Step 3, technical documentation from the AI system vendor, designated oversight personnel, and legal counsel to validate fundamental rights impact assessments. PolicyGuard provides high-risk compliance templates with requirement checklists for each Annex III category. This step is done when every high-risk system has documented human oversight procedures with designated personnel, input data quality processes are established, operation logging is enabled and verified, fundamental rights impact assessments are complete where required, and transparency obligations are implemented. The most common mistake is relying on the AI vendor to handle compliance on your behalf. As a deployer, your organization has independent obligations under the EU AI Act. The vendor's compliance with provider requirements does not satisfy your deployer requirements. You must implement your own controls even when using a compliant vendor's system.
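For the record-keeping obligation, one practical pattern is structured, append-only logging of each operation together with the human oversight outcome. A minimal sketch using Python's standard logging module; the field names, file path, and example system are assumptions to adapt to your environment:

```python
import json
import logging
from datetime import datetime, timezone

# Append-only JSON-lines log; protect the file with OS-level access controls
# and apply your retention policy so logs can be produced to authorities on request.
handler = logging.FileHandler("ai_operation_log.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger = logging.getLogger("high_risk_ai_operations")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_operation(system: str, input_ref: str, output: str,
                  overseer: str, overridden: bool) -> None:
    """Record one operation of a high-risk system plus the oversight outcome."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input_ref": input_ref,        # reference to input data, not the data itself
        "output": output,
        "human_overseer": overseer,
        "output_overridden": overridden,
    }))

log_operation(
    system="resume screening",         # hypothetical high-risk system
    input_ref="application-2041",
    output="shortlist",
    overseer="hr.reviewer@example.com",
    overridden=False,
)
```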
Step 5: Add Limited Risk Transparency Disclosures
Limited-risk AI systems have a single primary obligation: transparency. Individuals interacting with or affected by these systems must be informed that they are interacting with AI or that content was AI-generated. While the requirements are lighter than those for high-risk systems, they are still enforceable, and non-compliance with transparency obligations carries penalties. Many organizations overlook limited-risk requirements because they focus all their attention on high-risk systems, creating a compliance gap that regulators can identify easily.
For each limited-risk system, implement the applicable transparency measures. For AI systems that interact with individuals, such as chatbots or virtual assistants, clearly disclose to the user that they are interacting with an AI system before or at the start of the interaction. The disclosure must be clear, concise, and noticeable, not buried in terms of service. Common implementations include a banner at the top of chat interfaces, an introductory message from the chatbot identifying itself as AI, or a persistent visual indicator. For AI systems that generate or manipulate content including text, images, audio, or video, label the output as AI-generated in a way that is machine-readable and detectable. For deepfakes or synthetic media specifically, disclosure must be visible to the end viewer. For emotion recognition or biometric categorization systems classified as limited risk, inform individuals that such a system is in operation and obtain consent where required under applicable law. Document the specific transparency measure implemented for each system, how it is displayed to individuals, and the date it was implemented.
You need the list of limited-risk systems from Step 3, UX or design resources to implement disclosures in user interfaces, and legal counsel to validate that disclosures meet the Act's requirements. This step is done when every limited-risk system has a documented and implemented transparency disclosure, the disclosure is visible and clear in the actual user experience, and you have evidence such as screenshots or configuration records showing the disclosure in place. The most common mistake is implementing disclosures that are technically present but practically invisible, such as a disclosure in eight-point font at the bottom of a page or a one-time notification that the user dismisses immediately. Regulators evaluate whether the disclosure effectively informs the individual, not whether it technically exists somewhere in the interface.
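In a chat interface, the disclosure can be enforced in code so that every session opens with a clear AI notice and generated outputs carry a machine-readable marker. A minimal sketch; the wording, function names, and metadata format are illustrative, not a prescribed standard:

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "A human agent is available on request."
)

def start_conversation(send_message) -> None:
    """Always open the session with the AI disclosure before any other output."""
    send_message(AI_DISCLOSURE)

def label_generated_content(text: str) -> dict:
    """Attach a machine-readable marker to AI-generated content."""
    return {
        "content": text,
        "metadata": {"generator": "ai", "disclosure": "AI-generated content"},
    }

# Usage: the disclosure is the first message the user sees, and generated
# outputs carry a marker that downstream systems can detect.
start_conversation(print)
print(label_generated_content("Draft reply to the customer..."))
```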
Step 6: Establish Ongoing Compliance Monitoring
EU AI Act compliance is not a one-time implementation but an ongoing obligation. AI systems change as vendors update them, new AI systems are adopted, risk classifications may shift as use cases evolve, and the regulatory framework itself will be supplemented by standards and guidance over time. Without ongoing monitoring, your initial compliance posture degrades until the next audit or regulatory examination reveals the gaps that have accumulated since implementation. Monitoring converts compliance from a periodic project into a continuous function.
Establish monitoring across four dimensions:
- System change monitoring: create a process for vendors to notify you of significant changes to AI systems you deploy, review vendor changelogs or update notifications at least quarterly, and reassess risk classification when the system's functionality or data processing changes materially.
- New system onboarding: integrate EU AI Act compliance assessment into your AI tool procurement and approval process so every new AI system is classified and equipped with required controls before deployment.
- Compliance metrics: track and report on the status of human oversight activities, log retention compliance, transparency disclosure functioning, fundamental rights impact assessment currency, and training completion for oversight personnel.
- Regulatory monitoring: track new standards, guidelines, and enforcement actions from EU regulatory authorities, join industry groups or subscribe to legal updates focused on EU AI Act developments, and assess the impact of new guidance on your existing classifications and controls.
Assign an owner for each monitoring dimension and set review frequencies: monthly for compliance metrics, quarterly for system change review, and ongoing for regulatory monitoring.
You need a compliance monitoring tool or tracking system, assigned owners for each monitoring dimension, and relationships with AI system vendors for change notifications. PolicyGuard provides continuous compliance monitoring with automated alerts when system changes or regulatory updates affect your compliance posture. This step is done when monitoring processes are documented and assigned for all four dimensions, review frequencies are set with calendar reminders, initial baseline metrics are captured, and the first monitoring cycle has been completed to verify the process works. The most common mistake is monitoring only high-risk systems and ignoring changes to limited or minimal-risk systems. A minimal-risk system that adds new functionality involving employee data processing could shift to high-risk, and you will miss this reclassification trigger without monitoring all systems.
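A lightweight way to catch reclassification triggers is to re-run a tier check whenever a system's recorded use case changes and flag any shift for review. A simplified, self-contained sketch; the use-case labels, tier names, and alert wording are illustrative, and any flagged change still needs the full Step 3 classification with legal review:

```python
from dataclasses import dataclass

# Minimal tier lookup for illustration only.
HIGH_RISK_USE_CASES = {"worker management", "recruitment screening", "credit scoring"}

def tier_for(use_case: str) -> str:
    return "high" if use_case in HIGH_RISK_USE_CASES else "limited-or-minimal"

@dataclass
class MonitoredSystem:
    name: str
    use_case: str
    tier: str                 # tier recorded at the last review

def quarterly_review(system: MonitoredSystem, current_use_case: str) -> None:
    """Flag any tier change caused by vendor updates or new use cases."""
    new_tier = tier_for(current_use_case)
    if new_tier != system.tier:
        print(f"ALERT: {system.name} moved from {system.tier} to {new_tier}; "
              "reassess controls before continued use.")
    system.tier = new_tier
    system.use_case = current_use_case

crm = MonitoredSystem("CRM assistant", "customer chatbot", "limited-or-minimal")
# Vendor adds AI-driven employee performance features in a quarterly update:
quarterly_review(crm, "worker management")
```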
Step 7: Prepare for Regulatory Examination
Regulatory examination preparation is about organizing your compliance evidence into a format that demonstrates systematic, documented compliance to a regulator. Even before a formal examination occurs, being prepared reduces the organizational stress and disruption that regulatory inquiries cause. The EU AI Act grants national supervisory authorities broad investigative powers including the ability to request documentation, conduct audits, and access AI systems for testing. Organizations that cannot produce organized evidence quickly during an examination face extended investigations and increased scrutiny.
Prepare an examination-ready package containing the following documentation:
- The complete AI system inventory with risk classifications and assessment rationale
- The prohibited use assessment showing that every system was evaluated against prohibited practices
- For each high-risk system: human oversight documentation, input data quality procedures, operation log samples and retention evidence, the fundamental rights impact assessment, and transparency implementation evidence
- For each limited-risk system: transparency disclosure documentation with screenshots or configuration evidence
- The compliance monitoring framework showing ongoing processes, metrics, and review records
- The organizational structure for AI Act compliance, including responsible persons, reporting lines, and training records for oversight personnel
- A chronological log of all compliance activities, including assessments, implementations, reviews, and updates, with dates
Store this package in an organized, quickly accessible location. Designate a primary contact who can respond to regulatory inquiries and locate any requested document within one business day. Conduct a tabletop exercise where someone plays the role of a regulator requesting specific evidence, and measure how quickly your team can produce it.
You need all documentation from Steps 1 through 6, a secure document management system, a designated regulatory contact, and time for the tabletop exercise. This step is done when the examination-ready package is compiled and organized, the designated contact can locate any document within one business day, a tabletop exercise has been completed with documented results and improvements, and the package is stored securely with appropriate access controls. The most common mistake is treating examination preparation as a one-time exercise before an anticipated examination. Regulatory inquiries can arrive without warning, triggered by complaints, media reports, or cross-border cooperation between supervisory authorities. Maintain examination readiness continuously rather than preparing only when you expect an examination.
Common Mistakes
- Classifying by technology instead of use case: The EU AI Act classifies risk by how AI is used, not what technology powers it. The same underlying model can be minimal risk in one application and high risk in another. Always classify based on the specific deployment context.
- Assuming vendor compliance covers deployer obligations: Providers and deployers have separate, independent obligations under the Act. Your AI vendor being compliant does not make you compliant. Implement your deployer-specific controls regardless of the vendor's compliance status.
- Ignoring embedded AI: AI features added to existing platforms by vendors create new compliance obligations. Monitor vendor updates and reassess when new AI capabilities are added to tools you already use.
- Static compliance: Initial classification and control implementation becomes outdated as systems, use cases, and regulatory guidance evolve. Establish continuous monitoring to maintain compliance over time.
- Minimal transparency effort: Transparency disclosures that technically exist but do not effectively inform individuals will not satisfy regulators. Design disclosures for actual comprehension, not just technical compliance.
Simplify EU AI Act Implementation
PolicyGuard provides AI system inventory with risk classification, compliance requirement mapping, evidence management, and continuous monitoring in one platform designed for EU AI Act compliance.
Start free trial
How Long This Takes
| Scenario | Timeline |
|---|---|
| Simple (fewer than 10 AI systems) | 5-10 weeks |
| Complex (10+ AI systems) | 11-25 weeks |
Frequently Asked Questions
Does the EU AI Act apply to organizations outside the EU?
Yes, the EU AI Act has extraterritorial scope. It applies to providers who place AI systems on the EU market regardless of where they are established, deployers of AI systems who are located within the EU, and providers and deployers located outside the EU where the output produced by the AI system is used in the EU. If your AI systems affect EU residents or are used by EU-based customers, the Act applies to your organization even if you have no physical presence in the EU.
What is the difference between a provider and a deployer under the EU AI Act?
A provider is the entity that develops an AI system or has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark. A deployer is the entity that uses an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. Most organizations that use third-party AI tools are deployers. If you develop custom AI systems or significantly modify a third-party system, you may be considered a provider with correspondingly heavier obligations.
How do we handle AI systems that are difficult to classify?
For borderline classifications, apply the higher risk tier and implement its requirements. This conservative approach costs more in compliance effort but eliminates the risk of under-classification. Document the ambiguity, your analysis, and the rationale for the classification chosen. Consult with legal counsel specializing in EU AI regulation for systems where the classification has significant business impact. As regulatory guidance and standards evolve, revisit borderline classifications and adjust if clearer direction emerges.
What happens during an EU AI Act regulatory examination?
National supervisory authorities can request access to documentation, test AI systems, conduct on-site audits, and interview personnel. They typically request your AI system inventory with risk classifications, evidence of compliance measures for high-risk systems, transparency disclosures for limited-risk systems, and records of ongoing monitoring activities. Examinations can be triggered by complaints, market surveillance programs, or cross-border cooperation between authorities. Organizations that produce organized, comprehensive evidence quickly tend to resolve examinations faster with fewer findings.
Can we use PolicyGuard to manage EU AI Act compliance specifically?
Yes. PolicyGuard includes EU AI Act-specific features including AI system inventory with risk tier classification fields, prohibited use assessment templates, high-risk compliance requirement checklists mapped to Annex III categories, transparency disclosure tracking, continuous compliance monitoring with regulatory update alerts, and examination-ready evidence export. The platform maps your AI systems to their specific EU AI Act obligations and tracks compliance status across all requirements in a single dashboard.
Get EU AI Act Compliant
PolicyGuard maps every AI system to its EU AI Act obligations and tracks compliance across all requirements. From inventory to examination readiness in one platform.
Start free trial