California AI Laws in 2026: What Businesses Must Comply With

PolicyGuard Team
10 min read

California has enacted multiple AI laws: the AI Transparency Act (SB 942), which requires disclosure of AI-generated content; AB 2013, which requires training data transparency; and CPPA regulations that expand automated decision-making rights under the CCPA.

The stakes are significant: the CPPA can impose fines of up to $7,500 per intentional violation, and the California Attorney General retains independent enforcement authority.

Who This Applies To: Any company using AI with California customers, processing California consumer data with AI, generating AI content for California audiences, or employing California workers in AI roles. California's broad jurisdictional reach means most US companies with a digital presence serving California consumers are likely covered by at least one of these laws.

California does not have a single comprehensive AI statute. Instead, the state has enacted multiple overlapping laws and regulations that collectively create one of the most demanding AI compliance environments in the United States. Companies operating in California or serving California consumers must navigate the AI Transparency Act (SB 942), the training data transparency requirements of AB 2013, the California Privacy Protection Agency's automated decision-making technology regulations, and AI-related provisions within the CCPA as amended by the CPRA.

This fragmented approach creates real compliance challenges because each law has different applicability thresholds, different enforcement agencies, and different compliance timelines. This guide maps out what each law requires, who enforces it, and what businesses need to do. For the broader national picture, see our 2026 AI regulatory compliance guide. For how Colorado's approach compares, see our Colorado AI Act guide.

The practical effect of California's multi-law approach is that companies often need to comply with several requirements simultaneously. A company deploying an AI chatbot for California customers, for example, must consider SB 942's disclosure rules, the CPPA's automated decision-making provisions if the chatbot influences consumer-affecting decisions, and AB 2013's training data transparency requirements if the company also developed the underlying model.

What California AI Laws Require

AI Transparency Act (SB 942)

SB 942, effective January 1, 2026, requires providers of generative AI systems to provide AI detection tools and content provenance information. Covered providers must make available at no cost an AI detection tool that allows users to assess whether content was generated by the provider's system. Providers must also include provenance data, such as metadata or watermarks, in AI-generated content where technically feasible. The law targets providers of generative AI systems with over one million monthly users, though the disclosure requirements apply broadly to AI-generated content reaching California audiences. SB 942 also prohibits removing or disabling provenance information from AI-generated content and requires providers to maintain public documentation about their detection and provenance methods.
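SB 942 leaves the technical format of provenance data to providers, and industry practice is converging on standards such as C2PA. As an illustration only, not a legal implementation, here is a minimal Python sketch of attaching a tamper-evident provenance manifest to generated content and detecting when the disclosure has been stripped or the content altered. All field names are hypothetical.

```python
import hashlib
from datetime import datetime, timezone

def attach_provenance(content: str, provider: str, system_name: str) -> dict:
    """Wrap AI-generated content with a minimal provenance manifest.

    SB 942 does not prescribe a format; real deployments typically use a
    standard such as C2PA. The field names here are illustrative only.
    """
    manifest = {
        "generator": provider,
        "system": system_name,
        "ai_generated": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
        # The hash binds the manifest to this exact content, so tampering
        # (including stripping the disclosure) becomes detectable.
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
    }
    return {"content": content, "provenance": manifest}

def verify_provenance(record: dict) -> bool:
    """Check that provenance is present and still matches the content."""
    manifest = record.get("provenance")
    if manifest is None:
        return False  # provenance was removed; SB 942 prohibits this
    expected = hashlib.sha256(record["content"].encode()).hexdigest()
    return manifest["content_sha256"] == expected
```

The same hash check serves both halves of the obligation: proving the disclosure was made, and flagging content from which it was later removed.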

Training Data Transparency (AB 2013)

AB 2013, effective January 1, 2026, requires developers of AI systems and services to post on their website a high-level summary of the datasets used to train their AI. This includes a description of the sources, the types of data in each dataset (such as text, images, or audio), whether datasets include personal information, and whether the data was purchased, licensed, scraped, or otherwise obtained. The law applies to developers making AI systems or services available in California. The intent is to create transparency around training data practices, which enables downstream deployers and consumers to understand the foundation of AI outputs.
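AB 2013 requires a posted summary, not any particular format or schema. Purely as a sketch, the disclosure categories the law names (sources, data types, personal information, acquisition method) can be modeled as a simple completeness checklist. The field names below are our assumptions, not a statutory standard.

```python
# Illustrative schema for an AB 2013 training-data summary. The statute
# requires a high-level website posting; it does not mandate this (or any)
# machine-readable layout.
REQUIRED_FIELDS = {"name", "sources", "data_types",
                   "contains_personal_info", "acquisition"}

def validate_dataset_summary(summary: dict) -> list[str]:
    """Return the missing disclosure fields (an empty list means complete)."""
    return sorted(REQUIRED_FIELDS - summary.keys())

example_summary = {
    "name": "web-corpus-2024",                   # hypothetical dataset
    "sources": ["publicly available web pages", "licensed news archive"],
    "data_types": ["text"],                      # e.g. text, images, audio
    "contains_personal_info": True,              # also triggers CCPA analysis
    "acquisition": ["scraped", "licensed"],      # purchased/licensed/scraped/other
}
```

Keeping the summary in a structured form like this makes it easy to regenerate the public posting whenever training datasets change.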

CPPA Automated Decision-Making Technology Regulations

The California Privacy Protection Agency has finalized regulations under the CCPA/CPRA that expand consumer rights related to automated decision-making technology (ADMT). These regulations require businesses to provide consumers with pre-use notice before using ADMT to make significant decisions, including the logic of the ADMT, the consumer's right to opt out, and how to exercise that right. Consumers gain the right to opt out of ADMT used for significant decisions about employment, healthcare, housing, education, and financial services. Businesses must also provide access to the results of ADMT processing upon consumer request and must conduct regular risk assessments for ADMT used in significant decisions.
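To make the operational shape of these rules concrete, here is a simplified, hypothetical decision gate: before ADMT runs on a significant decision, verify that pre-use notice was delivered and that the consumer has not opted out. Real programs also need access-request handling and documented risk assessments; the category names and return values below are illustrative, not regulatory terms.

```python
# Categories the CPPA rules treat as "significant decisions" (simplified).
SIGNIFICANT_CATEGORIES = {"employment", "healthcare", "housing",
                          "education", "financial"}

def route_decision(category: str, notice_delivered: bool, opted_out: bool) -> str:
    """Gate an ADMT run on notice and opt-out status.

    A sketch of the workflow, not legal advice: return values are
    hypothetical routing labels for a deployer's own pipeline.
    """
    if category not in SIGNIFICANT_CATEGORIES:
        return "admt_allowed"          # not a significant decision
    if not notice_delivered:
        return "blocked_no_notice"     # pre-use notice must come first
    if opted_out:
        return "human_review"          # honor the consumer's opt-out
    return "admt_allowed"
```

The key design point is that the gate runs before the model does: notice and opt-out are preconditions, not after-the-fact disclosures.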

CCPA AI Provisions

The CCPA as amended by the CPRA already includes provisions relevant to AI. These include the right to know about automated decision-making and the logic involved, the right to opt out of the sale or sharing of personal information used for AI training, data minimization requirements that limit AI systems to processing only personal information reasonably necessary for the disclosed purpose, and purpose limitation requirements that restrict repurposing consumer data for AI training without additional notice. Businesses must treat inferences drawn by AI systems as personal information subject to all CCPA rights including access, deletion, and correction.
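The inference rule has a practical consequence for deletion workflows: wiping collected fields while leaving model-derived attributes behind is incomplete fulfillment. A minimal sketch, with a hypothetical record layout:

```python
def fulfill_deletion(record: dict) -> dict:
    """Delete both collected fields and AI-derived inferences.

    Under the CCPA, inferences drawn about a consumer are themselves
    personal information, so a compliant deletion flow must clear them
    alongside directly collected data. The record layout is a
    hypothetical example, not a prescribed structure.
    """
    return {
        "consumer_id": record["consumer_id"],  # retained for suppression purposes
        "collected": {},
        "inferences": {},                      # model-derived attributes go too
    }
```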

Key Dates and Enforcement Timeline

| Date | Requirement | Who | Status |
|------|-------------|-----|--------|
| January 1, 2023 | CPRA amendments to CCPA take effect, including AI-related provisions | Covered businesses | Active |
| January 1, 2026 | AI Transparency Act (SB 942) takes effect | Generative AI providers | Active |
| January 1, 2026 | AB 2013 training data transparency requirements take effect | AI developers | Active |
| Q1 2026 | CPPA automated decision-making regulations finalized and enforceable | Covered businesses using ADMT | Active |
| July 1, 2026 | CPPA enforcement actions for ADMT violations expected to begin | Covered businesses | Upcoming |
| 2026, ongoing | California AG retains concurrent enforcement authority for CCPA violations | All covered entities | Active |

Penalties for Non-Compliance

California AI law enforcement involves multiple agencies with overlapping jurisdiction. The California Privacy Protection Agency (CPPA) can impose administrative fines of up to $2,500 per violation and up to $7,500 per intentional violation of the CCPA and its implementing regulations, including the ADMT rules. Given the volume of consumer data most companies process, per-violation penalties can accumulate to millions of dollars.
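The accumulation math is simple but sobering. Actual penalties are assessed case by case by the agency or a court; the sketch below only shows the statutory per-violation arithmetic at the maximum rates.

```python
def ccpa_exposure(violations: int, intentional: bool) -> int:
    """Theoretical maximum statutory exposure under the CCPA's
    per-violation fines ($2,500 standard, $7,500 intentional)."""
    rate = 7_500 if intentional else 2_500
    return violations * rate

# A modest incident touching 10,000 consumers, treated as one violation
# each, already reaches $25,000,000 in theoretical exposure:
exposure = ccpa_exposure(10_000, intentional=False)
```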

The California Attorney General retains independent enforcement authority under the CCPA and can seek civil penalties at the same per-violation rates. The AG can also bring enforcement actions under the California Unfair Competition Law and False Advertising Law for deceptive AI practices, which carry additional penalties. For SB 942 violations, the AG has enforcement authority with penalties determined under existing consumer protection frameworks.

The CCPA provides a limited private right of action for data breaches resulting from a business's failure to maintain reasonable security, which can apply when AI systems inadequately protect personal information. Consumers can seek statutory damages of $100 to $750 per consumer per incident or actual damages, whichever is greater, in data breach cases. While the private right of action does not extend to all CCPA violations, it creates meaningful litigation risk for companies whose AI systems suffer security failures involving California consumer data.

Companies should also note that CPPA enforcement actions are public, creating reputational risk beyond the financial penalties. The agency has signaled that AI-related enforcement is a priority area, and early enforcement actions are likely to target companies with visible consumer-facing AI deployments that lack required disclosures.

Compliance Checklist

  • ☐ Inventory all AI systems processing California consumer data and classify by applicable law (SB 942, AB 2013, CCPA/CPRA, CPPA ADMT rules)
  • ☐ Implement AI content disclosure and provenance mechanisms for generative AI systems as required by SB 942
  • ☐ Publish training data transparency documentation on your website as required by AB 2013
  • ☐ Build pre-use notice mechanisms for automated decision-making technology used in significant consumer decisions
  • ☐ Implement consumer opt-out workflows for ADMT processing in employment, healthcare, housing, education, and financial decisions
  • ☐ Ensure AI-generated inferences are treated as personal information subject to CCPA access, deletion, and correction rights
  • ☐ Conduct and document risk assessments for each ADMT system used in significant decisions
  • ☐ Update privacy policies to include AI-specific disclosures covering all applicable California requirements
  • ☐ Establish data minimization and purpose limitation controls for personal information used in AI training and inference
  • ☐ Create a cross-functional compliance calendar tracking annual assessment deadlines, policy review dates, and regulatory update monitoring
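The system inventory (the first item above) can start as something as simple as a rules table mapping trigger characteristics to laws. The flags below are deliberate simplifications; real applicability turns on statutory thresholds, definitions, and legal review.

```python
def applicable_laws(system: dict) -> set[str]:
    """Map one AI system to the California laws it may trigger.

    A triage sketch only: the trigger flags are assumptions, and a
    "maybe" from this function is a prompt for legal analysis, not a
    conclusion.
    """
    laws = set()
    # SB 942 primarily targets large generative AI providers.
    if system.get("generative") and system.get("monthly_users", 0) > 1_000_000:
        laws.add("SB 942")
    # AB 2013 applies if you developed the model, not just deployed it.
    if system.get("developer"):
        laws.add("AB 2013")
    # CPPA ADMT rules attach to significant consumer decisions.
    if system.get("significant_decisions"):
        laws.add("CPPA ADMT rules")
    # CCPA/CPRA attach to personal information processing generally.
    if system.get("processes_personal_info"):
        laws.add("CCPA/CPRA")
    return laws
```

Running every system in the inventory through a triage function like this surfaces overlap early, which is exactly where isolated, per-law compliance efforts tend to leave gaps.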

California's overlapping requirements make a unified compliance approach essential. Organizations that address each law in isolation will duplicate effort and risk gaps. Contact PolicyGuard to see how a single platform can map your AI systems to all applicable California requirements.

PolicyGuard helps companies like yours get AI governance documentation audit-ready in 48 hours or less.

Start free trial →

How PolicyGuard Helps

PolicyGuard helps organizations navigate California's complex AI regulatory environment by mapping each requirement to your specific AI systems and compliance obligations:

  • Multi-Law Compliance Mapping: PolicyGuard maps your AI systems against all applicable California laws simultaneously, identifying which systems trigger SB 942 disclosure requirements, AB 2013 training data transparency, CPPA ADMT rules, and CCPA provisions. This eliminates the risk of addressing one law while missing obligations under another.
  • Automated Disclosure Management: PolicyGuard tracks and manages the disclosure requirements across California's AI laws, from SB 942 content provenance to CPPA pre-use notices. The platform generates required disclosure language, monitors for changes that require disclosure updates, and maintains audit trails of all disclosures.
  • ADMT Risk Assessment Workflow: PolicyGuard provides structured risk assessment templates aligned to the CPPA's ADMT requirements. Assessment workflows route through the right stakeholders, capture required information, and store completed assessments with version history for regulatory review.
  • Consumer Rights Request Management: PolicyGuard integrates AI-specific consumer rights into your existing CCPA/CPRA request workflow. When a consumer exercises their right to opt out of ADMT, access their AI-generated inferences, or request deletion of AI-processed data, PolicyGuard routes the request, tracks fulfillment, and documents compliance.
  • Regulatory Change Monitoring: California's AI legal landscape is evolving rapidly. PolicyGuard monitors proposed legislation, CPPA rulemaking, and AG guidance to alert your team when new requirements affect your AI systems, giving you lead time to update compliance programs before enforcement begins.

FAQ

Do all California AI laws apply to every business?

No. Each law has different applicability thresholds. SB 942 primarily targets generative AI providers with over one million monthly users. AB 2013 applies to developers making AI systems available in California. The CCPA/CPRA and CPPA regulations apply to businesses meeting CCPA thresholds: annual gross revenue over $25 million, buying or selling personal information of 100,000 or more California consumers, or deriving 50 percent or more of revenue from selling personal information. However, the broad reach of California's consumer protection laws means most companies with significant California customer bases will be covered by at least one requirement.

How do the CPPA's ADMT rules differ from the CCPA's existing provisions?

The CCPA as amended by CPRA established baseline rights around automated decision-making. The CPPA's ADMT regulations significantly expand these rights by requiring pre-use notice before ADMT is used in significant decisions, creating specific opt-out rights for ADMT in defined categories, mandating regular risk assessments, and requiring businesses to provide access to the results of ADMT processing. The ADMT rules add operational requirements that go beyond the CCPA's general disclosure framework.

What happens if I comply with one California AI law but miss another?

Each law is enforced independently. Complying with SB 942 does not satisfy CCPA requirements, and complying with CCPA does not satisfy AB 2013. Each enforcement agency evaluates compliance against its own law. A company could face simultaneous enforcement actions from the CPPA for ADMT violations and the AG for SB 942 violations. This is why a unified compliance approach that maps all applicable laws to your AI systems is critical.

Does AB 2013 require disclosing proprietary training methods?

AB 2013 requires a high-level summary of training datasets, not disclosure of proprietary algorithms, model architectures, or trade secrets. The required disclosures focus on the types and sources of training data, whether personal information was included, and how the data was obtained. Companies can provide meaningful transparency about their data practices without revealing competitive technical details.

Are there California AI laws specifically addressing employment AI?

While California does not yet have a standalone AI-in-hiring law like NYC Local Law 144, the CPPA's ADMT regulations include employment as a category of significant decisions requiring pre-use notice and opt-out rights. California's existing FEHA anti-discrimination laws also apply to AI-assisted employment decisions. Several proposed California bills would create additional employer obligations for AI in hiring, performance evaluation, and workforce management, and companies should monitor the legislative calendar for new requirements. For guidance on AI in employment contexts, see our AI policy governance guide.

California's AI regulatory environment will continue to evolve as the CPPA issues new guidance and the legislature considers additional bills. Companies that build flexible compliance programs now will be best positioned to absorb new requirements. Talk to PolicyGuard about building a California AI compliance program that scales with the regulatory landscape.

AI Regulations · AI Compliance · Enterprise AI



Building PolicyGuard AI — the compliance layer for enterprise AI governance.



Ready to govern every AI tool your team uses?

One platform to enforce policies, track compliance, and prove governance across 80+ AI tools.

Book a demo