The Colorado AI Act, effective February 1, 2026, requires developers and deployers of high-risk AI to conduct impact assessments, provide consumer disclosures, implement risk management policies, and offer opt-out mechanisms.
The law applies to any entity that develops or deploys high-risk AI systems affecting Colorado consumers in consequential decisions such as employment, education, housing, credit, healthcare, and insurance. Deployers with fewer than 50 full-time employees may be exempt from certain obligations, including the risk management program and impact assessment requirements, if they do not train the system with their own data and use it as the developer intended. The Colorado Attorney General enforces the law, with civil penalties and a 90-day cure period for first violations.
Who This Applies To: Developers and deployers of high-risk AI systems affecting Colorado consumers in consequential decisions (employment, education, housing, credit, healthcare, insurance). Small deployers with fewer than 50 full-time employees may qualify for a limited exemption from some deployer obligations. Both in-state and out-of-state companies are covered if their AI systems affect Colorado consumers.
The Colorado AI Act (SB 24-205) is the first comprehensive state law in the United States regulating artificial intelligence based on risk classification. Signed into law in May 2024 and effective February 1, 2026, the law creates specific obligations for both developers who build AI systems and deployers who use them in decisions that materially affect consumers. Unlike sector-specific regulations such as NYC Local Law 144 for hiring or Illinois BIPA for biometrics, the Colorado AI Act applies across all consequential decision domains.
This guide breaks down what the law requires, who must comply, the enforcement timeline, penalty structure, and the specific steps companies need to take before the compliance deadline. For a broader view of the AI regulatory landscape, see our 2026 AI regulatory compliance guide. For how to build a governance program that satisfies these requirements, see our AI policy governance guide.
The law distinguishes between two categories of regulated entities. Developers are organizations that create or substantially modify AI systems. Deployers are organizations that use AI systems to make or substantially support consequential decisions about consumers. Many companies will qualify as both if they build internal AI tools that affect consumer-facing decisions.
What the Colorado AI Act Requires
Developer Obligations
Developers of high-risk AI systems must provide deployers with specific documentation before the system is used. This includes a general description of the reasonably foreseeable uses and known limitations of the system, a summary of the types of data used to train the system, known or foreseeable risks of algorithmic discrimination, and a description of the data governance measures applied during development. Developers must also make available documentation sufficient for deployers to complete their own impact assessments and must publish a statement on their website summarizing the types of high-risk AI systems they have developed and how they manage risks of algorithmic discrimination.
Deployer Obligations
Deployers carry the heavier compliance burden. They must implement a risk management policy and program to govern their use of high-risk AI systems. This includes completing an impact assessment for each high-risk AI system before deployment and annually thereafter, or within 90 days of any substantial modification. Deployers must provide consumers with a notice that an AI system is being used to make or substantially support a consequential decision. The notice must include a description of the system's purpose, the types of data processed, and how the consumer can contest the decision or request a human review. Deployers must also provide consumers with an opportunity to opt out of the AI system where technically feasible and to appeal adverse decisions.
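The required notice elements can be captured as a simple structured record. This is an illustrative sketch only: the field names, example values, and the completeness check are hypothetical, not statutory terminology.

```python
# Illustrative structure for a consumer-facing AI disclosure notice.
# Field names and values are hypothetical; the statute specifies the
# required content of the notice, not any particular format.
CONSUMER_NOTICE = {
    "system_purpose": "Automated screening of rental applications",
    "decision_domain": "housing",                     # consequential decision category
    "data_categories": ["credit history", "income", "rental history"],
    "contest_instructions": "Email appeals@example.com to contest the decision",
    "human_review_available": True,                   # right to appeal to a human
    "opt_out_available": True,                        # where technically feasible
}

def notice_is_complete(notice: dict) -> bool:
    """Check that every required disclosure element is present and non-empty."""
    required = ("system_purpose", "data_categories", "contest_instructions")
    return all(notice.get(field) for field in required)
```

A check like this can run in CI or a release gate so that a system cannot ship to Colorado consumers with a missing disclosure field.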
What Makes an AI System High-Risk
An AI system is classified as high-risk if it makes or is a substantial factor in making a consequential decision about a consumer. Consequential decisions are those with material legal or similarly significant effects in the domains of employment and employment-related decisions, education enrollment and opportunity, access to financial or lending services, access to essential government services, access to healthcare services and coverage, housing availability and terms, insurance underwriting and pricing, and access to legal services.
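The two-part test above (consequential domain plus substantial factor) can be expressed as a small classification helper. The domain labels below are paraphrased shorthand for the statutory categories, not legal terms:

```python
# Consequential decision domains enumerated in the Act (paraphrased labels).
CONSEQUENTIAL_DOMAINS = {
    "employment", "education", "financial_services", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

def is_high_risk(decision_domain: str, substantial_factor: bool) -> bool:
    """An AI system is high-risk if it makes, or is a substantial factor in
    making, a consequential decision in one of the listed domains."""
    return substantial_factor and decision_domain in CONSEQUENTIAL_DOMAINS
```

Note that both conditions must hold: a spam filter in an HR inbox touches the employment domain but is not a substantial factor in any hiring decision, so it would not be high-risk under this test.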
Consumer Rights
Colorado consumers affected by high-risk AI decisions have the right to receive notice that AI is being used, the right to a description of how the AI system contributed to the decision, the right to opt out of AI-based processing where technically feasible, and the right to appeal adverse decisions through a human review process. These rights apply regardless of whether the consumer has a direct contractual relationship with the deployer.
Key Dates and Enforcement Timeline
| Date | Requirement | Who | Status |
|---|---|---|---|
| May 17, 2024 | Colorado AI Act signed into law | All regulated entities | Complete |
| February 1, 2026 | Law takes effect; all obligations enforceable | Developers and deployers | Active |
| February 1, 2026 | Developers must provide system documentation to deployers | Developers | Active |
| February 1, 2026 | Deployers must have risk management policies in place | Deployers | Active |
| February 1, 2026 | Consumer disclosure and opt-out mechanisms must be operational | Deployers | Active |
| May 1, 2026 | First annual impact assessments due (for systems deployed on the effective date) | Deployers | Upcoming |
| February 1, 2027 | AG enforcement review and potential rulemaking | Colorado AG | Upcoming |
Penalties for Non-Compliance
The Colorado AI Act is enforced exclusively by the Colorado Attorney General. There is no private right of action, meaning individual consumers cannot sue companies directly under this law. However, the enforcement structure still carries meaningful risk for non-compliant organizations.
The Attorney General can bring enforcement actions under the Colorado Consumer Protection Act, which provides for civil penalties of up to $20,000 per violation. Each affected consumer can constitute a separate violation, meaning penalties can scale rapidly for companies processing large volumes of consumer decisions. The AG can also seek injunctive relief requiring companies to stop using non-compliant AI systems.
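Because each affected consumer can count as a separate violation, exposure compounds quickly. A back-of-the-envelope sketch (the $20,000 figure is the statutory maximum per violation; actual penalties are set in enforcement, so this is a worst-case bound, not a prediction):

```python
MAX_PENALTY_PER_VIOLATION = 20_000  # civil penalty cap under the Colorado CPA

def max_exposure(affected_consumers: int) -> int:
    """Worst-case exposure if each affected consumer is a separate violation."""
    return affected_consumers * MAX_PENALTY_PER_VIOLATION

# A non-compliant system that touched 1,000 Colorado consumers could face
# up to $20,000,000 at the statutory maximum.
print(f"${max_exposure(1_000):,}")  # → $20,000,000
```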
The law includes a 90-day cure period. If the Attorney General identifies a violation and notifies the company, the company has 90 days to cure the violation and provide written notice of the cure to the AG. If the company cures the violation within the 90-day window, no penalty is imposed for that specific violation. This cure period applies only to first violations and does not protect against willful or repeated non-compliance. Companies should note that the cure period is not guaranteed to remain in the statute. The legislature included a provision for reviewing the cure period and may amend or eliminate it in future sessions.
Compliance Checklist
- ☐ Inventory all AI systems used in consequential decisions affecting Colorado consumers and classify each as high-risk or standard
- ☐ Complete an impact assessment for each high-risk AI system documenting purpose, data inputs, discrimination risks, and mitigation measures
- ☐ Implement a written risk management policy covering AI system selection, deployment, monitoring, and retirement
- ☐ Build consumer disclosure mechanisms providing clear notice of AI involvement, system purpose, and data types used
- ☐ Establish opt-out processes allowing consumers to request decisions made without AI where technically feasible
- ☐ Create an appeal and human review process for consumers who receive adverse AI-assisted decisions
- ☐ If a developer, provide deployers with complete system documentation including training data summaries and known risk disclosures
- ☐ Schedule annual impact assessment reviews and assign internal owners for each high-risk AI system
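The inventory and review-cycle items above can be anchored in a minimal record structure. Everything here is a sketch: the fields, the example system, and the 365-day cadence are illustrative assumptions, not a statutory schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AISystemRecord:
    """One row in a high-risk AI inventory (illustrative fields, not statutory)."""
    name: str
    decision_domain: str     # e.g. "employment", "housing"
    high_risk: bool
    owner: str               # accountable internal owner for the system
    deployed_on: date
    last_assessment: date    # date the most recent impact assessment was completed

    def next_assessment_due(self) -> date:
        # Annual cadence measured from the last completed impact assessment.
        return self.last_assessment + timedelta(days=365)

# Hypothetical example entry and an overdue-assessment query.
inventory = [
    AISystemRecord("resume-screener", "employment", True, "hr-compliance",
                   date(2026, 2, 1), date(2026, 1, 15)),
]
overdue = [r for r in inventory
           if r.high_risk and r.next_assessment_due() < date.today()]
```

Even a spreadsheet with these columns satisfies the intent; the point is a single source of truth that names an owner and a next review date for every high-risk system.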
Getting compliant requires cross-functional coordination between legal, engineering, compliance, and product teams. Organizations that start with the checklist above and build repeatable processes around each item will be in the strongest position. If you need help mapping these requirements to your specific AI systems, contact PolicyGuard for a compliance assessment.
PolicyGuard helps companies like yours get AI governance documentation audit-ready in 48 hours or less.
How PolicyGuard Helps
PolicyGuard is purpose-built to help organizations comply with state AI regulations including the Colorado AI Act. Here is how the platform maps to the law's specific requirements:
- AI System Inventory and Classification: PolicyGuard automatically discovers AI tools in use across your organization and classifies them against the Colorado AI Act's high-risk criteria. Instead of manually cataloging systems, your compliance team gets a continuously updated inventory with risk classifications that map directly to the law's consequential decision categories.
- Impact Assessment Templates: PolicyGuard includes pre-built impact assessment templates aligned to the Colorado AI Act's requirements. Each template walks your team through the required elements including purpose documentation, data input analysis, algorithmic discrimination risk evaluation, and mitigation planning. Completed assessments are stored with full version history for audit readiness.
- Consumer Disclosure Management: PolicyGuard generates consumer disclosure language based on your AI system configurations and deployment contexts. The platform tracks which disclosures are active, monitors for required updates when systems change, and provides audit trails showing when disclosures were published and modified.
- Continuous Monitoring and Alerts: PolicyGuard monitors your AI systems against Colorado AI Act requirements on an ongoing basis. When a new AI tool is adopted, a system is modified, or an impact assessment is approaching its annual review date, the platform alerts the responsible team members and creates tracked action items.
- Audit-Ready Documentation: Every action taken in PolicyGuard is logged with timestamps, user attribution, and version history. When the Colorado Attorney General or an internal auditor requests evidence of compliance, PolicyGuard generates complete evidence packages showing your risk management program, impact assessments, consumer disclosures, and monitoring activities.
FAQ
Does the Colorado AI Act apply to companies outside Colorado?
Yes. The law applies to any developer or deployer whose high-risk AI systems affect Colorado consumers, regardless of where the company is headquartered. If your AI system makes or substantially supports consequential decisions about people in Colorado, you are subject to the law. This extraterritorial reach mirrors how state privacy laws like the CCPA apply to out-of-state companies processing California consumer data.
What counts as a consequential decision under the Colorado AI Act?
A consequential decision is one that has a material legal or similarly significant effect on a consumer in employment, education, financial services, housing, healthcare, insurance, or access to essential government or legal services. The law specifically targets decisions where AI is a substantial factor in the outcome. Routine operational uses of AI such as spam filtering or website personalization generally do not qualify as consequential decisions unless they materially affect a consumer's access to services in the listed categories.
What is the difference between a developer and a deployer under the law?
A developer is an entity that creates, codes, or substantially modifies an AI system. A deployer is an entity that uses an AI system to make or support consequential decisions about consumers. Many companies will be both. For example, a company that builds an internal AI hiring tool is the developer of that tool and also the deployer when it uses the tool to screen candidates. Each role carries distinct obligations: developers must provide documentation and transparency, while deployers must conduct impact assessments, provide consumer notices, and offer opt-out mechanisms.
How often do impact assessments need to be updated?
Deployers must complete an impact assessment before deploying a high-risk AI system, update it annually, and update it within 90 days of any substantial modification to the system. A substantial modification includes changes to the data inputs, decision logic, or the categories of consumers affected. Organizations should build annual review cycles into their compliance calendars and establish change management processes that trigger assessment updates when AI systems are modified.
Can consumers sue companies under the Colorado AI Act?
No. The Colorado AI Act does not include a private right of action. Only the Colorado Attorney General can bring enforcement actions under the law. However, consumers harmed by discriminatory AI decisions may still have claims under other state and federal anti-discrimination laws. The absence of a private right of action makes AG enforcement the primary risk, and companies should monitor the AG's public statements and enforcement priorities to calibrate their compliance efforts accordingly.
The Colorado AI Act represents a significant shift in how US states regulate artificial intelligence. Companies that take compliance seriously now will be better positioned as other states follow Colorado's lead. Talk to PolicyGuard about how we can help you build a compliance program that meets Colorado's requirements and scales to cover the broader AI regulatory landscape.