Mapping AI tools to EU AI Act risk categories takes seven steps: build a complete inventory, apply the prohibited use test, test against Annex III high-risk criteria, classify limited risk tools, classify the remainder as minimal risk, document your rationale, and implement controls by category.
The EU AI Act classifies AI systems into four risk tiers: prohibited, high-risk, limited risk, and minimal risk. Each tier carries different compliance obligations, from outright bans for prohibited systems to transparency requirements for limited risk and no additional obligations for minimal risk. Every organization using AI tools within the EU must determine which tier each tool falls into and implement the corresponding controls. Getting this classification wrong creates either compliance gaps that lead to enforcement or over-compliance that wastes resources.
The EU AI Act entered into force in August 2024, with compliance deadlines phased over the following years. Organizations operating in the EU or serving EU customers must classify every AI tool they use against the Act's risk categories and implement controls appropriate to each category. This is not optional, and it is not something you can defer until enforcement actions begin. This guide walks through seven steps to systematically classify your AI tool inventory, document your rationale, and implement the right controls for each risk category. For a broader overview of EU AI Act compliance, see our EU AI Act compliance guide.
Before You Start
Before beginning the classification process, establish three prerequisites.
- A complete AI tool inventory. You cannot classify tools you do not know about. If you have not already inventoried every AI tool in use across your organization, complete that step first using the approach described in our guide on tracking AI tool usage. Include tools used by individual departments, embedded AI features in existing SaaS products, and AI-powered automation workflows.
- Access to the EU AI Act text and supporting guidance documents, particularly Article 5 covering prohibited practices, Annex III listing high-risk use cases, and Article 50 addressing transparency obligations. The European AI Office publishes implementation guidance that clarifies how to apply these provisions.
- A classification team that includes legal counsel familiar with EU regulations, a technical representative who understands what each AI tool actually does, and a business representative who can explain how each tool is used in practice. Classification decisions require all three perspectives.
Step-by-Step Guide
Step 1: Build Complete AI Tool Inventory
Action: Create a comprehensive inventory of every AI tool your organization uses, operates, or deploys. For each tool, document the tool name and vendor, what the tool does in plain language, what data the tool processes including data types and classification levels, which departments and employees use the tool, how the tool is deployed (cloud-based, on-premises, or embedded), whether your organization is a provider, deployer, or user under the Act's definitions, and what decisions or outputs the tool influences. Include AI tools that are embedded in other products such as AI-powered search within your CRM, AI writing assistance in your email client, or AI analytics in your business intelligence platform.
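If you maintain the inventory in code or export it from a SaaS management platform, a structured record keeps the required attributes consistent across tools. Here is a minimal sketch in Python; the field names and the example vendor are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    USER = "user"

@dataclass
class AIToolRecord:
    """One inventory entry per AI tool, including embedded AI features."""
    name: str
    vendor: str
    description: str               # what the tool does, in plain language
    data_processed: list[str]      # data types and classification levels
    departments: list[str]         # which departments and employees use it
    deployment: str                # "cloud", "on-premises", or "embedded"
    role: Role                     # your organization's role under the Act
    influenced_outputs: list[str]  # decisions or outputs the tool affects

# Example entry for an embedded AI feature, not just a standalone tool
crm_search = AIToolRecord(
    name="AI-powered CRM search",
    vendor="ExampleCRM Inc.",      # hypothetical vendor
    description="Semantic search over customer records",
    data_processed=["customer PII", "sales notes"],
    departments=["Sales", "Support"],
    deployment="embedded",
    role=Role.DEPLOYER,
    influenced_outputs=["which customer records reps act on"],
)
```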
Why this matters: The EU AI Act applies to AI systems, not to organizations in the abstract. Your compliance obligations are determined by the specific AI tools you use and how you use them. An incomplete inventory means you will miss tools that may fall into high-risk or even prohibited categories, creating compliance gaps that could result in fines of up to thirty-five million euros or seven percent of global turnover. The inventory also determines whether you are a provider, deployer, or user for each tool because the Act assigns different obligations based on your role. Getting the inventory wrong cascades into incorrect classification and incorrect controls.
Tools: Browser monitoring, OAuth detection, and DNS monitoring to identify AI tools in active use. Employee surveys to capture tools that technical monitoring may miss. SaaS management platforms that identify AI features in existing subscriptions. PolicyGuard provides automated AI tool discovery that feeds directly into the classification workflow.
Done when: Every AI tool in use across the organization is documented with all required attributes, the inventory has been validated by department heads who can confirm that no tools are missing, and each tool's provider/deployer/user role has been assigned.
Common mistake: Treating the inventory as an IT exercise and missing AI tools adopted by business departments without IT involvement. Marketing, HR, legal, and finance departments frequently adopt AI tools independently. Survey every department head directly.
Step 2: Apply Prohibited Use Assessment
Action: Review each tool in your inventory against the Article 5 prohibited practices. The EU AI Act prohibits several categories of AI practice, including social scoring systems, real-time remote biometric identification in public spaces with limited exceptions, AI systems that exploit vulnerabilities of specific groups, subliminal manipulation techniques that cause harm, emotion recognition in workplaces and educational institutions with limited exceptions, and untargeted scraping of facial images for facial recognition databases. For each tool, document whether any of its features or use cases fall within a prohibited category. If a tool has multiple features, assess each feature independently because a tool may be permissible for one use case but prohibited for another.
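The per-feature screening can be run as a simple checklist. The sketch below paraphrases the Article 5 categories into plain-language questions of our own wording; it illustrates the mechanics of per-feature assessment and is not a substitute for legal review:

```python
PROHIBITED_CATEGORIES = {
    "social_scoring": "Does it score people based on social behavior or personal traits?",
    "realtime_biometric_id": "Does it do real-time remote biometric ID in public spaces?",
    "exploits_vulnerabilities": "Does it exploit vulnerabilities of a specific group?",
    "subliminal_manipulation": "Does it use subliminal techniques that could cause harm?",
    "workplace_emotion_recognition": "Does it infer emotions at work or in education?",
    "facial_image_scraping": "Does it scrape facial images to build a recognition database?",
}

def screen_feature(feature: str, answers: dict[str, bool]) -> list[str]:
    """Return the Article 5 categories a feature potentially falls under.

    `answers` maps each category key to the assessor's yes/no answer.
    Any hit should be flagged for immediate legal review, per Step 2.
    """
    return [cat for cat, hit in answers.items() if hit and cat in PROHIBITED_CATEGORIES]

# Assess each feature independently: a tool can be permissible for one
# use case and prohibited for another.
hits = screen_feature(
    "sentiment analysis in performance reviews",
    {"workplace_emotion_recognition": True},
)
assert hits == ["workplace_emotion_recognition"]
```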
Why this matters: Prohibited AI practices carry the highest penalties under the Act: fines up to thirty-five million euros or seven percent of annual global turnover, whichever is higher. The prohibited category must be assessed first because if a tool falls here, no amount of controls can make it compliant. It must be discontinued immediately. Organizations sometimes assume that prohibitions only apply to extreme cases, but the categories are broader than many expect. An AI tool used for employee emotion recognition during performance reviews, for example, may fall within the workplace emotion recognition prohibition depending on how it is implemented. Start with prohibitions to eliminate the highest-risk tools from your inventory before investing effort in classifying the rest.
Tools: A classification checklist built from Article 5 provisions, legal counsel review for borderline cases, and documentation templates for recording the assessment rationale. PolicyGuard provides a prohibited use screening questionnaire that maps each Article 5 provision to plain-language questions about tool functionality. For more on the broader risk management framework, see our AI risk management framework guide.
Done when: Every tool in the inventory has been assessed against all Article 5 prohibited categories, any tools identified as potentially prohibited have been flagged for immediate legal review, and the assessment rationale is documented for each tool.
Common mistake: Applying the prohibited use test only to tools that obviously involve surveillance or biometrics. The manipulation and exploitation categories are broader than biometrics and can apply to marketing tools, personalization engines, or recommendation systems depending on implementation. Assess every tool, not just the ones that seem obviously high-risk.
Step 3: Test Against Annex III High-Risk Criteria
Action: For every tool that passed the prohibited use assessment, evaluate whether it qualifies as a high-risk AI system under Annex III. The high-risk categories include AI used in biometric identification and categorization, management and operation of critical infrastructure, education and vocational training including tools that determine access to education, employment and worker management including recruitment and performance evaluation, access to essential private and public services including credit scoring and insurance, law enforcement, migration and border control, and administration of justice and democratic processes. For each tool, determine whether its use case falls within any Annex III category. Document the specific Annex III paragraph that applies or document why none applies.
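A lightweight decision helper makes the use-case-level assessment repeatable. The area keys and descriptions below are illustrative summaries of the Annex III areas, not citations to specific paragraphs; record the actual paragraph reference in your documentation:

```python
# Plain-language paraphrases of the Annex III high-risk areas.
ANNEX_III_AREAS = {
    "biometrics": "biometric identification and categorisation",
    "critical_infrastructure": "management and operation of critical infrastructure",
    "education": "access to education and vocational training",
    "employment": "recruitment, worker management, performance evaluation",
    "essential_services": "credit scoring, insurance, essential public services",
    "law_enforcement": "law enforcement",
    "migration": "migration, asylum, and border control",
    "justice": "administration of justice and democratic processes",
}

def classify_high_risk(use_case_areas: list[str]) -> tuple[bool, str]:
    """Classify on the specific use case, not the general tool category."""
    matched = [area for area in use_case_areas if area in ANNEX_III_AREAS]
    if matched:
        return True, f"Falls under Annex III area(s): {', '.join(matched)}"
    return False, "No Annex III category applies (document this negative rationale)"

# Same tool, different use cases, different outcomes:
print(classify_high_risk(["employment"]))  # recruitment screening -> high-risk
print(classify_high_risk([]))              # interview scheduling only -> not high-risk
```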
Why this matters: High-risk classification triggers the most extensive compliance obligations under the Act, including risk management systems, data governance requirements, technical documentation, human oversight mechanisms, accuracy and robustness standards, and registration in the EU database. Organizations that incorrectly classify a high-risk tool as limited or minimal risk face enforcement action and fines up to fifteen million euros or three percent of global turnover. Conversely, organizations that over-classify tools as high-risk waste significant resources implementing controls that are not required. Accurate classification at this stage determines the efficiency of your entire compliance program.
Tools: Annex III reference document with all high-risk categories enumerated, a classification decision tree that walks assessors through each category with plain-language criteria, and legal review for any tool where the classification is ambiguous. PolicyGuard provides an Annex III assessment workflow that guides the assessor through each high-risk category with contextual explanations and examples.
Done when: Every non-prohibited tool has been assessed against all Annex III high-risk categories, tools classified as high-risk have the specific Annex III paragraph documented, borderline cases have been reviewed by legal counsel, and the assessment rationale is recorded for audit purposes.
Common mistake: Classifying tools based on their general category rather than their specific use case. A recruitment tool is not automatically high-risk. It is high-risk if it is used for making or materially influencing decisions about recruitment, selection, or terms of employment. The same tool used only for scheduling interviews without influencing hiring decisions may not qualify. Assess use cases, not tool categories.
Step 4: Classify Limited Risk (Transparency Obligations)
Action: For tools that are neither prohibited nor high-risk, assess whether they trigger limited risk transparency obligations under Article 50. Limited risk AI systems include chatbots and conversational AI where users must be informed they are interacting with an AI system, AI systems that generate or manipulate images, audio, or video content including deepfakes where the synthetic nature must be disclosed, and emotion recognition systems and biometric categorization systems where individuals must be informed of the system's operation. For each tool, document whether it triggers any transparency obligation and specify what disclosure is required.
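Mapping capabilities to disclosures can also be mechanized. In the sketch below, the capability names and disclosure wording are placeholders that show the pattern, not legally reviewed language:

```python
# Illustrative mapping from tool capability to required disclosure.
TRANSPARENCY_OBLIGATIONS = {
    "chatbot": "Inform users they are interacting with an AI system",
    "synthetic_media": "Disclose that images, audio, or video are AI-generated",
    "emotion_recognition": "Inform individuals the system is in operation",
    "biometric_categorisation": "Inform individuals the system is in operation",
}

def required_disclosures(capabilities: list[str]) -> list[str]:
    """Return the disclosures a tool triggers, one per matching capability."""
    return [TRANSPARENCY_OBLIGATIONS[c] for c in capabilities if c in TRANSPARENCY_OBLIGATIONS]

# A marketing tool that generates images for external publication:
print(required_disclosures(["synthetic_media"]))
```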
Why this matters: Limited risk obligations are narrower than high-risk obligations but still legally binding. The transparency requirements apply to a large number of common enterprise AI tools. Customer-facing chatbots, AI content generation tools used for marketing, and AI-powered image editing tools all trigger transparency obligations. The compliance burden is manageable because the primary requirement is disclosure rather than the extensive documentation and control requirements that apply to high-risk systems. However, failing to provide required transparency disclosures is still a violation of the Act and carries penalties. Getting this classification right allows you to implement simple, low-cost transparency measures for the tools that need them without over-engineering compliance for tools that do not.
Tools: Article 50 reference checklist mapping transparency obligations to common AI tool types, content generation audit to identify all customer-facing AI interactions, and disclosure template library for common transparency implementations. PolicyGuard flags tools with transparency obligations and provides disclosure templates appropriate to each tool type.
Done when: All non-prohibited, non-high-risk tools have been assessed for transparency obligations, tools triggering transparency requirements have the specific obligation documented, and draft disclosure language has been prepared for each tool with transparency requirements.
Common mistake: Overlooking AI-generated content used in marketing materials, social media, or customer communications. If your marketing team uses AI to generate or substantially edit images, videos, or text that is published externally, transparency obligations likely apply. Audit all external-facing content workflows for AI involvement.
Step 5: Classify Remaining as Minimal Risk
Action: All AI tools that do not fall into prohibited, high-risk, or limited risk categories are classified as minimal risk. The EU AI Act imposes no additional obligations on minimal risk AI systems beyond voluntary codes of conduct. For each minimal risk tool, document the classification with a brief rationale explaining why the tool does not meet the criteria for any higher risk category. This negative rationale is important for audit purposes because it demonstrates that the classification was deliberate rather than the result of oversight.
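One way to enforce the negative rationale is to refuse a minimal risk classification unless every higher category has been explicitly ruled out. A minimal sketch, with check labels of our own choosing:

```python
def minimal_risk_rationale(tool: str, checks: dict[str, str]) -> str:
    """Build the negative rationale for a minimal risk classification.

    `checks` must record why each higher category was ruled out, e.g.
    {"Article 5": "no prohibited feature", "Annex III": "no category applies",
     "Article 50": "no transparency trigger"}.
    """
    for required in ("Article 5", "Annex III", "Article 50"):
        if required not in checks:
            raise ValueError(f"{required} was not explicitly ruled out for {tool}")
    ruled_out = "; ".join(f"{k}: {v}" for k, v in checks.items())
    return f"{tool} classified as minimal risk. Higher categories ruled out: {ruled_out}"
```

A classification that cannot produce this rationale is an unclassified tool, which is exactly the failure mode described in the common mistake below.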
Why this matters: Minimal risk is the most common classification for enterprise AI tools. Productivity assistants, internal search tools, AI-powered analytics, code completion tools, and most internal-facing AI applications typically fall into this category. Documenting the minimal risk classification with rationale protects the organization in two ways. First, it demonstrates to regulators and auditors that you conducted a systematic classification process rather than ignoring tools you did not assess. Second, it provides a baseline for reassessment when tools are updated, when use cases change, or when regulatory guidance evolves. A tool classified as minimal risk today may require reclassification if the vendor adds features that change its risk profile or if the organization changes how it uses the tool.
Tools: Classification documentation template with fields for rationale, assessor, date, and next review date. Inventory management system for tracking classification status across all tools. PolicyGuard records minimal risk classifications with rationale and automatically flags tools for reassessment when vendor updates or usage changes are detected.
Done when: Every AI tool in the inventory has been classified into exactly one risk category, minimal risk tools have documented rationale explaining why higher categories do not apply, and the complete classification inventory has been reviewed for consistency and completeness.
Common mistake: Treating minimal risk as a default bin for tools you did not bother to assess. Every minimal risk classification should include documented rationale showing that the prohibited, high-risk, and limited risk criteria were considered and rejected. A classification without rationale is an unclassified tool, not a minimal risk tool.
Step 6: Document Classification Rationale
Action: For every tool in the inventory, create a classification record that includes the tool name, vendor, and version assessed; the assigned risk category; the specific regulatory provisions considered such as Article 5 paragraphs, Annex III categories, or Article 50 obligations; the rationale for the classification including why higher categories were ruled out; the assessor name and qualifications; the assessment date; the next scheduled reassessment date; and any assumptions that would trigger earlier reassessment if they change. Store all classification records in a centralized repository that is accessible to the governance team, auditable by external parties, and protected against unauthorized modification.
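A structured record with a completeness gate prevents classifications from being finalized with missing fields. The sketch below uses illustrative field names; adapt them to your repository's schema:

```python
from dataclasses import dataclass, fields
from datetime import date

@dataclass
class ClassificationRecord:
    tool_name: str
    vendor: str
    version_assessed: str
    risk_category: str          # "prohibited" | "high" | "limited" | "minimal"
    provisions_considered: str  # e.g. Article 5 paragraphs, Annex III, Article 50
    rationale: str              # why higher categories were ruled out
    assessor: str
    assessment_date: date
    next_reassessment: date
    reassessment_triggers: str  # assumptions that would force earlier review

def is_complete(record: ClassificationRecord) -> bool:
    """A record with any empty field should not be finalized."""
    return all(getattr(record, f.name) not in ("", None) for f in fields(record))
```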
Why this matters: The EU AI Act requires organizations to demonstrate that their classification decisions are systematic and defensible. When a regulator or auditor reviews your AI governance program, they will not simply check whether tools are classified. They will examine how you arrived at each classification. A classification without documented rationale is effectively unverifiable and will be treated as a gap rather than a decision. Comprehensive documentation also protects the organization against changing interpretations. If regulatory guidance evolves and a tool's classification is questioned, contemporaneous documentation of the rationale at the time of assessment demonstrates good faith compliance even if the interpretation later changes.
Tools: Classification record template with all required fields, centralized document management system with access controls and audit logging, and review workflow that ensures records are complete before being finalized. PolicyGuard maintains classification records with full audit trails, version history, and automated completeness checks that prevent records from being finalized without required fields.
Done when: Every tool has a complete classification record with all required fields populated, records are stored in a centralized repository with appropriate access controls, the governance team has reviewed all records for quality and consistency, and reassessment dates are calendared for every tool.
Common mistake: Creating documentation that describes only the classification outcome without explaining the reasoning. A record that says "minimal risk" without explaining why the tool does not qualify as high-risk provides no value in an audit. The rationale is more important than the conclusion.
Step 7: Implement Controls by Risk Category
Action: Based on the completed classification, implement the controls required for each risk category. For high-risk tools, implement the full compliance stack: risk management system, data governance framework, technical documentation, record-keeping and logging, transparency information for users, human oversight mechanisms, and accuracy and robustness monitoring. For limited risk tools, implement the required transparency measures including user disclosure that they are interacting with AI, labeling of AI-generated content, and notification of emotion recognition if applicable. For minimal risk tools, implement your organization's baseline AI governance controls such as policy acknowledgment, approved use documentation, and inclusion in monitoring. Create a control implementation timeline that prioritizes high-risk tools and assigns responsibility for each control to a specific team or individual.
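The classification-to-controls mapping is straightforward to encode, which also makes the implementation backlog easy to generate and prioritize. The control names below paraphrase the obligations described above; substitute your own control catalog:

```python
CONTROLS_BY_CATEGORY = {
    "high": [
        "risk management system", "data governance framework",
        "technical documentation", "record-keeping and logging",
        "transparency information", "human oversight", "accuracy monitoring",
    ],
    "limited": [
        "AI interaction disclosure", "AI-generated content labeling",
    ],
    "minimal": [
        "policy acknowledgment", "approved use documentation", "monitoring",
    ],
    "prohibited": ["discontinue immediately"],
}

def implementation_plan(classified_tools: dict[str, str]) -> list[tuple[str, str]]:
    """Expand each tool's classification into (tool, control) work items,
    ordered so the most urgent categories start earliest."""
    priority = {"prohibited": 0, "high": 1, "limited": 2, "minimal": 3}
    items = []
    for tool, category in sorted(classified_tools.items(), key=lambda kv: priority[kv[1]]):
        items.extend((tool, control) for control in CONTROLS_BY_CATEGORY[category])
    return items
```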
Why this matters: Classification without corresponding controls is a compliance exercise that produces documentation without risk reduction. The entire purpose of the classification process is to determine what controls each tool requires so that those controls can be implemented. High-risk controls are the most resource-intensive and should be started immediately because they require cross-functional coordination between legal, technical, and business teams. Limited risk transparency measures are typically straightforward to implement but require coordination with customer-facing teams. Minimal risk controls should already be in place if your organization has a functioning AI governance program. The control implementation step is where classification translates from a compliance exercise into actual risk management.
Tools: Project management platform for tracking control implementation across all tools and risk categories, compliance management system for documenting control design and operational effectiveness, and monitoring tools to verify that controls operate as designed. PolicyGuard maps classification outcomes to specific control requirements and tracks implementation progress across your entire AI tool inventory.
Done when: Controls appropriate to each risk category have been designed and documented, implementation responsibilities have been assigned with deadlines, high-risk tool controls are either implemented or on a documented implementation timeline, limited risk transparency measures are deployed, and a monitoring process verifies ongoing control effectiveness.
Common mistake: Implementing controls for high-risk tools while neglecting to verify that basic governance controls are operational for minimal risk tools. Auditors and regulators expect to see governance across your entire AI inventory, not just the high-risk subset. A strong high-risk compliance program alongside ungoverned minimal risk tools signals selective rather than systematic governance.
Common Mistakes
- Classifying based on tool type instead of use case. The same AI tool can be minimal risk in one application and high-risk in another. Always assess based on how the tool is actually used in your organization, not on the vendor's general description.
- Performing a one-time classification and treating it as permanent. Tool features change, use cases evolve, and regulatory guidance updates. Schedule reassessment at least annually and whenever a tool is updated or its use case changes.
- Ignoring embedded AI features in existing tools. AI capabilities embedded in CRM, ERP, HR, and other enterprise platforms are still AI systems under the Act. They require classification and appropriate controls just like standalone AI tools.
- Skipping legal review for borderline classifications. Tools that fall near the boundary between risk categories require legal interpretation. Relying solely on technical staff for borderline classifications creates regulatory risk.
Classify Your AI Tools for EU AI Act Compliance
PolicyGuard automates the classification workflow with guided assessments for each risk category, documented rationale templates, and control mapping that turns classifications into actionable compliance plans.
Start free trial
PolicyGuard helps companies like yours get AI governance documentation audit-ready in 48 hours or less.
Start free trial →
How Long Does Each Step Take?
| Step | Small Inventory (<20 tools) | Large Inventory (20+ tools) |
|---|---|---|
| Build complete AI tool inventory | 1-2 days | 3-5 days |
| Apply prohibited use assessment | 2-4 hours | 1-2 days |
| Test against Annex III high-risk criteria | 1-2 days | 2-4 days |
| Classify limited risk (transparency) | 2-4 hours | 1-2 days |
| Classify remaining as minimal risk | 1-2 hours | 4-8 hours |
| Document classification rationale | 1-2 days | 2-3 days |
| Implement controls by risk category | 2-3 days | 3-5 days |
| Total | 4-7 days | 9-15 days |
Frequently Asked Questions
Does the EU AI Act apply to organizations outside the EU?
Yes, the EU AI Act has extraterritorial reach. It applies to any organization that places AI systems on the EU market, puts AI systems into service in the EU, or uses the output of AI systems within the EU, regardless of where the organization is established. If your AI tools process data from EU residents, generate outputs used by people in the EU, or are deployed in EU member states, the Act likely applies to your organization. The extraterritorial scope is similar to GDPR and means that US, UK, and other non-EU organizations must assess their exposure.
How often should you reassess AI tool risk classifications?
Reassess annually at minimum, and trigger immediate reassessment in four scenarios: when the vendor releases a significant update that changes tool capabilities, when your organization changes how it uses the tool, when regulatory guidance clarifies or changes the interpretation of risk categories, or when the tool starts processing new data types or serving new user populations. Annual reassessment catches gradual changes while trigger-based reassessment catches significant shifts that cannot wait for the next annual review cycle.
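If your inventory system tracks assessment dates, the cadence-plus-triggers rule is simple to automate. A sketch, with trigger labels of our own choosing:

```python
from datetime import date, timedelta

REASSESSMENT_TRIGGERS = (
    "significant vendor update",
    "change in tool use case",
    "new or changed regulatory guidance",
    "new data types or user populations",
)

def needs_reassessment(last_assessed: date, events: list[str]) -> bool:
    """Annual cadence at minimum, plus immediate reassessment on any trigger."""
    overdue = date.today() - last_assessed >= timedelta(days=365)
    return overdue or any(event in REASSESSMENT_TRIGGERS for event in events)
```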
What happens if you misclassify an AI tool under the EU AI Act?
Under-classification, classifying a high-risk tool as limited or minimal risk, is the more dangerous error because it results in missing required compliance controls. Penalties for non-compliance with high-risk obligations can reach fifteen million euros or three percent of global annual turnover. Over-classification wastes resources but does not create legal risk. If you discover a misclassification, correct it immediately, implement the required controls, and document the correction with a timeline showing when the error was identified and remediated. Prompt correction demonstrates good faith compliance.
Can an AI tool be classified in multiple risk categories simultaneously?
A single AI tool can have different risk classifications for different use cases. A general-purpose language model might be minimal risk when used for internal document drafting but high-risk when used to generate content that influences employment decisions. Classify based on use case rather than tool, and implement controls corresponding to the highest applicable risk category for each use case. If a tool has multiple use cases across different risk categories, document each use case and its classification separately.
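When one tool spans several use cases, a severity ordering picks the controlling category. A minimal sketch:

```python
# Severity ordering for choosing controls across a tool's use cases.
SEVERITY = {"minimal": 0, "limited": 1, "high": 2, "prohibited": 3}

def controlling_category(use_case_classifications: dict[str, str]) -> str:
    """Document each use case separately, but implement controls for the
    highest applicable risk category across all of them."""
    return max(use_case_classifications.values(), key=SEVERITY.__getitem__)

llm_uses = {
    "internal document drafting": "minimal",
    "screening job applications": "high",
}
assert controlling_category(llm_uses) == "high"
```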
What role does the AI tool vendor play in EU AI Act classification?
Vendors who are providers under the Act have their own classification and compliance obligations for the AI systems they develop and place on the market. However, your organization's obligations as a deployer are independent of the vendor's compliance. You cannot rely on the vendor's risk classification for your own compliance because the risk category may differ based on how you use the tool compared to the vendor's intended use. Request the vendor's conformity documentation and technical information, but conduct your own classification based on your specific use cases and deploy your own controls as required by your risk assessment.
Automate EU AI Act Risk Classification
PolicyGuard guides your team through the complete classification workflow for every AI tool in your inventory. Get documented rationale, control mapping, and reassessment scheduling in a single platform built for EU AI Act compliance.
Start free trial