AI Governance for Insurance: Underwriting, Claims, and Regulatory Risk

PolicyGuard Team
10 min read

Insurance companies using AI in underwriting, claims, or fraud detection must comply with NAIC model bulletins on AI, state insurance department guidance, and fair insurance practices laws prohibiting discriminatory AI outcomes.

The insurance industry is among the most heavily regulated sectors adopting AI, with state-level insurance commissioners increasingly scrutinizing algorithmic decision-making in pricing, underwriting, and claims. A robust AI governance program for insurance must address both traditional insurance regulations and emerging AI-specific requirements across all jurisdictions where the carrier operates.

Why AI Governance Is Different for Insurance

Insurance is fundamentally a business of risk assessment and prediction, making it a natural fit for AI and machine learning. But this alignment also creates heightened governance challenges that distinguish insurance from other industries.

State-by-state regulatory fragmentation is the most significant complicating factor. Unlike banking (regulated primarily at the federal level), insurance regulation occurs across 50 states plus territories, each with its own insurance department, regulations, and enforcement priorities. An AI model approved in one state may face scrutiny or prohibition in another, requiring governance programs that can adapt to jurisdictional variation.

Unfair discrimination laws carry unique weight in insurance. Every state prohibits unfair discrimination in insurance practices, and AI systems that produce disparate impacts on protected classes can trigger enforcement actions even without discriminatory intent. The challenge is particularly acute because many data features that improve predictive accuracy are correlated with protected characteristics like race, income, or geography.

Actuarial standards create additional accountability layers. Actuaries who rely on AI models must comply with Actuarial Standards of Practice (ASOPs), including ASOP No. 56 on modeling, which requires documentation of model limitations, assumptions, and validation results. This creates a professional accountability framework that overlays corporate governance.

Finally, insurance AI decisions directly affect consumers' access to essential financial protection. Errors in AI-driven underwriting or claims processing can leave individuals without coverage when they need it most, elevating the stakes of governance failures beyond financial penalties to genuine consumer harm.

The Top AI Risks in Insurance

Insurance carriers face a specific constellation of AI risks shaped by the industry's regulatory environment, data intensity, and consumer impact. The following matrix identifies priority risks for governance planning.

| Risk | Likelihood | Impact | Mitigation |
| --- | --- | --- | --- |
| Unfair discrimination in AI-driven underwriting or pricing | High | High | Conduct disparate impact testing across protected classes; implement bias monitoring dashboards; document fairness validation |
| Non-compliance with NAIC model bulletins on AI | High | High | Map NAIC requirements to internal controls; maintain AI inventory with regulatory classification; conduct annual compliance reviews |
| Opaque AI models failing regulatory explainability requirements | High | High | Require model documentation and explainability reports; use interpretable models for regulated decisions; maintain adverse action explanation capabilities |
| Claims automation errors causing wrongful denials | Medium | High | Implement human review thresholds for claim denials; maintain override audit trails; conduct regular claims accuracy reviews |
| Third-party model vendor risk | Medium | High | Require vendor model documentation; conduct independent validation; include AI governance terms in vendor contracts |
| Shadow AI use by agents and adjusters | High | Medium | Establish approved tool lists; implement network monitoring; provide approved AI alternatives for common tasks |
| Data quality degradation affecting model performance | Medium | Medium | Implement data quality monitoring; establish drift detection thresholds; schedule regular model revalidation |
| Fraud detection false positives causing customer harm | Medium | Medium | Set false positive thresholds; require human review before fraud flags trigger adverse actions; track and report false positive rates |

Carriers should map these risks against their specific AI use cases and jurisdictional exposure to develop a prioritized governance roadmap. Organizations with extensive multi-state operations will need to weight regulatory compliance risks more heavily.
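As a rough illustration of how that mapping might be operationalized, the sketch below scores each risk from the matrix by likelihood and impact and applies an extra weight to regulatory risks for multi-state carriers. The ordinal scale and the 1.5x weighting factor are illustrative assumptions, not values drawn from any regulatory framework.

```python
from dataclasses import dataclass

# Illustrative ordinal scale for the likelihood/impact ratings in the matrix above.
LEVEL = {"Low": 1, "Medium": 2, "High": 3}

@dataclass
class RiskItem:
    name: str
    likelihood: str
    impact: str
    regulatory: bool  # True if the risk is primarily a regulatory-compliance risk

def priority_score(risk: RiskItem, multi_state: bool) -> float:
    """Simple likelihood x impact score, with a hypothetical 1.5x weight on
    regulatory risks for carriers with extensive multi-state operations."""
    score = LEVEL[risk.likelihood] * LEVEL[risk.impact]
    if risk.regulatory and multi_state:
        score *= 1.5
    return score

risks = [
    RiskItem("Unfair discrimination in underwriting or pricing", "High", "High", regulatory=True),
    RiskItem("Claims automation errors causing wrongful denials", "Medium", "High", regulatory=False),
    RiskItem("Shadow AI use by agents and adjusters", "High", "Medium", regulatory=False),
]

for risk in sorted(risks, key=lambda r: priority_score(r, multi_state=True), reverse=True):
    print(f"{priority_score(risk, multi_state=True):>5.1f}  {risk.name}")
```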

What Regulators Expect

Insurance regulators have been among the most proactive in establishing AI governance expectations. The regulatory landscape is anchored by several key frameworks.

NAIC Model Bulletin on AI (2023, updated 2025). The National Association of Insurance Commissioners issued a model bulletin that has been adopted or adapted by a growing number of states. It requires insurers to establish AI governance frameworks, conduct impact assessments for AI systems affecting consumers, ensure human oversight of material AI decisions, maintain documentation sufficient to explain AI outcomes, and test for unfair discrimination. While not legally binding until adopted by individual states, it represents the consensus regulatory expectation.

Colorado SB 21-169 and its implementing regulations. Colorado was the first state to enact comprehensive AI governance requirements specifically for insurance, requiring insurers to implement risk management frameworks for AI, conduct testing to identify unfair discrimination, provide regulatory filings describing AI governance practices, and maintain records of AI system performance and outcomes. Other states are following Colorado's lead with similar requirements.

State unfair trade practices acts in every jurisdiction prohibit unfair discrimination in insurance, and regulators are increasingly interpreting these statutes to cover AI-driven decisions. Market conduct examinations now routinely include questions about AI governance, model validation, and fairness testing.

Rate filing requirements in many states require insurers to justify rating factors, including those derived from AI models. Regulators may reject rate filings that rely on opaque AI models without adequate documentation of the model's methodology and fairness testing results.

Carriers should monitor the regulatory landscape continuously, as new state-level requirements are emerging regularly. Building a governance program aligned with the NAIC model bulletin and Colorado's requirements provides a strong baseline that can be adapted to additional jurisdictions.

AI Governance Built for Insurance Teams

PolicyGuard helps insurance organizations enforce AI policies, detect shadow AI, and generate governance documentation that is audit-ready in 48 hours or less.

Start free trial →

Building an AI Policy for Insurance

An effective AI governance policy for insurance must bridge the gap between enterprise technology management and actuarial and regulatory compliance traditions. The policy framework should be organized around functional areas that align with how insurance organizations operate.

Underwriting AI Governance. Policies for AI in underwriting should require model documentation meeting actuarial standards, mandate disparate impact testing before deployment and on an ongoing basis, define the scope of automated underwriting decisions versus those requiring human review, establish model validation schedules and criteria, and require adverse action explanations that can be provided to applicants. This connects directly to your broader AI governance framework while addressing insurance-specific requirements.

Claims AI Governance. Claims processing AI requires policies addressing automation thresholds (which claims can be auto-adjudicated and which require human involvement), denial review requirements, fraud flag escalation procedures, claimant communication standards, and audit trail maintenance. Given the consumer impact of claims decisions, governance controls should be calibrated to the severity and complexity of claims.
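A minimal sketch of how those automation thresholds might be encoded as a routing rule appears below. The dollar limit, confidence floor, and field names are hypothetical; actual values would come from the carrier's claims governance policy.

```python
from dataclasses import dataclass

@dataclass
class ClaimDecision:
    claim_id: str
    recommended_action: str   # "approve" or "deny", as scored by the claims model
    claim_amount: float
    model_confidence: float

# Illustrative policy thresholds (assumptions, not prescribed values).
AUTO_APPROVE_LIMIT = 5_000.00     # claims above this amount always get human review
MIN_CONFIDENCE = 0.90             # low-confidence recommendations are escalated

def route_claim(decision: ClaimDecision) -> str:
    """Return the processing path required by this hypothetical claims AI policy."""
    if decision.recommended_action == "deny":
        # Denials are never auto-adjudicated; an adjuster reviews and any override is logged.
        return "human_review_required"
    if decision.claim_amount > AUTO_APPROVE_LIMIT or decision.model_confidence < MIN_CONFIDENCE:
        return "human_review_required"
    return "auto_adjudicate"

print(route_claim(ClaimDecision("CLM-1001", "approve", 1_250.00, 0.97)))   # auto_adjudicate
print(route_claim(ClaimDecision("CLM-1002", "deny", 800.00, 0.99)))        # human_review_required
```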

Pricing and Rating AI Governance. AI used in pricing requires governance that addresses rate filing documentation, regulatory explainability requirements, competitive fairness considerations, and geographic variation in requirements. Many states require that every rating factor be actuarially justified, which imposes documentation burdens on AI-derived rating variables.

Distribution and Marketing AI Governance. AI used for customer acquisition, cross-selling, or agent management requires policies addressing marketing compliance, fair lending and insurance practices in lead scoring, and data privacy in customer analytics. These use cases often receive less governance attention but can create significant regulatory exposure.

Building on your risk assessment framework, each AI system should be classified by risk tier based on its regulatory exposure, consumer impact, and decision authority, with governance controls scaled accordingly.
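One way to express that tiering is a simple classification function like the sketch below; the three-level ratings, the scoring cut-offs, and the control sets attached to each tier are illustrative assumptions rather than a standard taxonomy.

```python
def classify_risk_tier(regulatory_exposure: str, consumer_impact: str, decision_authority: str) -> str:
    """Assign a governance tier from three qualitative ratings ('low' | 'medium' | 'high').

    The tiering logic is an illustrative assumption; each carrier would define its own matrix.
    """
    levels = {"low": 1, "medium": 2, "high": 3}
    score = levels[regulatory_exposure] + levels[consumer_impact] + levels[decision_authority]
    if score >= 8 or decision_authority == "high":
        return "Tier 1: independent validation, fairness testing, human oversight, annual review"
    if score >= 5:
        return "Tier 2: standard validation and monitoring"
    return "Tier 3: lightweight registration and periodic review"

# Example: a fully automated underwriting model used in multiple regulated states.
print(classify_risk_tier("high", "high", "high"))
```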

How to Monitor and Enforce AI Governance in Insurance

Effective monitoring in insurance requires integration with existing compliance and actuarial oversight functions rather than creating entirely new structures.

Model Risk Management Integration. Insurance carriers typically have existing model risk management (MRM) frameworks. AI governance should extend these frameworks to cover AI and machine learning models, applying the same principles of independent validation, ongoing monitoring, and lifecycle management. Key metrics include model performance drift, prediction accuracy over time, false positive and negative rates, and outcome distribution across protected classes.
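For the drift metric specifically, many teams use the Population Stability Index (PSI) to compare the production score distribution against the validation baseline. The sketch below is a minimal PSI calculation; the 0.1/0.25 thresholds in the comment are common industry rules of thumb, not regulatory requirements.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare validation-time scores (expected) with current production scores (actual)."""
    # Bin edges are fixed on the validation sample so the comparison is stable over time.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Small floor avoids division by zero in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Rule-of-thumb thresholds: < 0.1 stable, 0.1-0.25 investigate, > 0.25 revalidate the model.
psi = population_stability_index(np.random.beta(2, 5, 10_000), np.random.beta(2, 4, 10_000))
print(f"PSI = {psi:.3f}")
```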

Regulatory Compliance Monitoring. Establish a regulatory tracking function that monitors AI-related regulatory developments across all operating jurisdictions. Map regulatory requirements to internal controls and validate compliance quarterly. Maintain a regulatory correspondence file documenting all AI-related regulator inquiries and responses.
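In practice, this mapping can live in something as simple as a structured table keyed by requirement, as in the hypothetical sketch below; the requirement names paraphrase the NAIC model bulletin and are not official citations.

```python
# Illustrative requirement-to-control mapping (assumed names, not regulatory citations).
requirement_controls = {
    "Written AI governance program": ["Board-approved AI policy", "Annual policy review"],
    "Inventory of AI systems": ["Model registry with regulatory classification"],
    "Testing for unfair discrimination": ["Pre-deployment disparate impact test", "Quarterly fairness monitoring"],
    "Human oversight of material decisions": ["Denial review workflow", "Override audit trail"],
}

def quarterly_compliance_gaps(evidence: dict[str, bool]) -> list[str]:
    """Return requirements with no control evidence recorded this quarter (hypothetical check).

    `evidence` maps a control name to True if evidence was collected in the review period.
    """
    return [
        requirement
        for requirement, controls in requirement_controls.items()
        if not any(evidence.get(control, False) for control in controls)
    ]

# Example quarterly review with one gap: no fairness testing evidence collected.
collected = {
    "Board-approved AI policy": True,
    "Model registry with regulatory classification": True,
    "Denial review workflow": True,
}
print(quarterly_compliance_gaps(collected))
```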

Fairness and Bias Monitoring. Implement continuous monitoring of AI outcomes across protected classes. Use statistical testing methodologies appropriate to insurance contexts, such as disparate impact ratios for underwriting acceptance rates, claims denial rates across demographic segments, and pricing outcome distributions. Set alert thresholds that trigger review and remediation when disparities exceed acceptable bounds.
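A minimal version of the disparate impact ratio check might look like the sketch below. The four-fifths (0.80) alert threshold is a rule of thumb borrowed from employment contexts, not an insurance regulatory standard, so actual thresholds should be set with actuarial and legal input.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str, reference_group: str) -> pd.Series:
    """Favorable-outcome rate of each group divided by the rate of the reference group.

    outcome_col is 1 for a favorable outcome (e.g., application accepted), 0 otherwise.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Hypothetical data and threshold: flag any group whose ratio falls below 0.80.
applications = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A"],
    "accepted": [1, 1, 0, 1, 0, 1],
})
ratios = disparate_impact_ratio(applications, "group", "accepted", reference_group="A")
flagged = ratios[ratios < 0.80]
print(ratios, flagged, sep="\n")
```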

Audit Trail and Documentation. Insurance regulators expect comprehensive documentation. Maintain records of all AI model development, validation, deployment decisions, and ongoing performance monitoring. These records should be organized to facilitate market conduct examinations and regulatory inquiries. Consider your documentation a regulatory asset: well-organized records that demonstrate governance diligence can significantly reduce examination friction.
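One lightweight way to keep those records examination-ready is a consistent schema for every governance event; the dataclass below is an illustrative shape, not a prescribed regulatory format.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class GovernanceRecord:
    """One entry in a model's governance audit trail (illustrative schema)."""
    model_id: str
    event_type: str          # e.g. "development", "validation", "deployment", "monitoring", "incident"
    occurred_at: datetime
    summary: str
    evidence_links: list[str] = field(default_factory=list)   # validation reports, fairness test results
    approved_by: Optional[str] = None                         # human sign-off for material decisions

# Example: logging a pre-deployment fairness test for a hypothetical underwriting model.
record = GovernanceRecord(
    model_id="uw-score-v3",
    event_type="validation",
    occurred_at=datetime(2025, 1, 15),
    summary="Pre-deployment disparate impact testing across protected classes",
    evidence_links=["reports/uw-score-v3-fairness.pdf"],
    approved_by="Chief Actuary",
)
print(record)
```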

Incident Response. Establish clear procedures for AI governance incidents, including model failures, bias discoveries, regulatory inquiries, and consumer complaints related to AI decisions. Define escalation paths, notification requirements (internal and regulatory), and remediation timelines.

Frequently Asked Questions

Does the NAIC model bulletin apply to all insurers?

The NAIC model bulletin is not directly enforceable. It becomes binding only when individual state insurance departments adopt it through regulation, bulletin, or guidance. However, it represents the consensus expectation of state regulators, and many states have adopted its principles even without formally adopting the bulletin. Insurers should treat it as a minimum governance standard regardless of whether their specific states have formally adopted it, as adoption is expanding and building governance retroactively is far more costly than building it proactively.

How should insurers handle third-party AI models from vendors?

Insurers remain responsible for the regulatory compliance of AI models used in their operations, regardless of whether those models were developed in-house or purchased from vendors. Governance requirements include obtaining sufficient model documentation from vendors to satisfy regulatory and actuarial standards, conducting independent validation of vendor models, including AI governance requirements in vendor contracts (covering documentation, updates, fairness testing, and audit rights), and monitoring vendor model performance in your specific book of business. The principle is clear: you cannot outsource regulatory accountability through vendor relationships.

What constitutes unfair discrimination in AI-driven insurance decisions?

Unfair discrimination in insurance is distinct from discrimination in other contexts. Insurance inherently discriminates based on risk, which is legally permissible. Unfair discrimination occurs when distinctions are not actuarially justified, are based on protected characteristics, or produce disparate impacts on protected classes without legitimate actuarial justification. AI governance programs should test for both intentional discrimination (examining input features) and disparate impact (examining outcome distributions), documenting the actuarial justification for any features that correlate with protected characteristics.

How do explainability requirements affect AI model selection in insurance?

Regulatory explainability requirements significantly influence model architecture decisions. For regulated decisions like underwriting and pricing, many carriers prefer interpretable models (such as generalized linear models or gradient-boosted trees with limited depth) over deep learning approaches. When more complex models are used, governance programs should require post-hoc explainability methods, individual decision explanations for adverse actions, and documentation of model behavior sufficient for regulatory review. The trend is toward greater explainability requirements, so investing in interpretable AI approaches is a strategic governance decision.
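As a simplified illustration of adverse action explanations from an interpretable model, the sketch below ranks the features that pulled an applicant's score down in a hypothetical linear underwriting score. The feature names and coefficients are invented for the example; in practice the coefficients would come from a fitted GLM, and more complex models would need post-hoc methods such as SHAP.

```python
import numpy as np

# Illustrative coefficients of a hypothetical interpretable underwriting score model.
feature_names = ["prior_claims_count", "years_licensed", "annual_mileage_thousands"]
coefficients = np.array([-0.8, 0.15, -0.05])
intercept = 2.0

def adverse_action_reasons(x: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank features by how much they lowered this applicant's score.

    Contribution-based reason codes are one common approach for linear/GLM scores.
    """
    contributions = coefficients * x
    order = np.argsort(contributions)          # most negative contributions first
    return [feature_names[i] for i in order[:top_n] if contributions[i] < 0]

applicant = np.array([3.0, 2.0, 25.0])         # 3 prior claims, 2 years licensed, 25k miles/year
score = intercept + float(coefficients @ applicant)
print(f"score = {score:.2f}; principal reasons: {adverse_action_reasons(applicant)}")
```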

Should insurance carriers appoint a chief AI officer or dedicated AI governance role?

The appropriate organizational structure depends on the carrier's size and AI maturity. Large carriers with extensive AI deployments benefit from a dedicated AI governance role (whether titled Chief AI Officer, Head of AI Risk, or similar) that coordinates across underwriting, claims, actuarial, compliance, and IT functions. Mid-sized carriers may assign AI governance responsibility to an existing role (such as the Chief Risk Officer or Chief Actuary) supported by a cross-functional AI governance committee. The key requirement is clear accountability: someone must own AI governance outcomes and have authority to enforce standards across the organization.



What AI regulations apply to insurance companies?

Insurance companies face a growing web of AI regulations. The NAIC has issued a model bulletin on AI governance that many states are adopting. Colorado's AI Act specifically targets insurance as a high-risk industry requiring algorithmic impact assessments. State unfair trade practice acts apply to AI-driven underwriting, pricing, and claims decisions. The Fair Credit Reporting Act governs AI systems that use consumer report data. Anti-discrimination laws prohibit AI that results in unfair discrimination based on protected characteristics, even through proxy variables. Several states have introduced or passed legislation requiring transparency in AI insurance decisioning, and federal regulators are increasingly scrutinizing AI in insurance.
What is the NAIC model bulletin on AI?

The NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers provides a regulatory framework for AI governance in insurance. It establishes expectations that insurers develop, implement, and maintain a written AI governance program. Key requirements include board or senior management oversight of AI, a comprehensive inventory of AI systems, risk-based testing and validation procedures, ongoing monitoring for unfair discrimination, and documentation of AI decision-making processes. The bulletin applies to all AI uses across the insurance lifecycle, including marketing, underwriting, pricing, claims, and fraud detection. States are adopting the bulletin with varying modifications to their existing regulatory frameworks.
How do you test insurance AI for proxy discrimination?

Testing for proxy discrimination requires analyzing whether facially neutral variables in your AI models correlate with and effectively stand in for protected characteristics. Start by conducting correlation analysis between model input variables and protected classes like race, ethnicity, gender, and religion. Use techniques like disparate impact testing to measure whether outcomes differ significantly across protected groups. Employ model interpretability tools like SHAP values to understand which features drive decisions and whether they serve as proxies. Test with counterfactual analysis by changing protected characteristics while holding other variables constant. Document all testing methodology, results, and remediation actions for regulatory examination readiness.
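A first-pass version of that correlation screen might look like the sketch below; the data, the protected-class labels, and the 0.3 threshold are illustrative assumptions, and correlation alone does not establish proxy discrimination.

```python
import pandas as pd

def proxy_screen(features: pd.DataFrame, protected: pd.Series, threshold: float = 0.3) -> pd.Series:
    """Flag numeric model inputs whose correlation with a protected attribute exceeds a screening threshold.

    A simple first-pass screen; flagged variables still need actuarial and legal review.
    """
    protected_numeric = protected.astype("category").cat.codes
    correlations = features.corrwith(protected_numeric).abs()
    return correlations[correlations > threshold].sort_values(ascending=False)

# Hypothetical data: does a territory-based rating variable track a protected class?
data = pd.DataFrame({
    "territory_factor": [1.2, 1.3, 0.9, 1.4, 0.8, 1.3],
    "vehicle_age":      [3, 7, 2, 10, 1, 6],
})
protected_class = pd.Series(["X", "X", "Y", "X", "Y", "X"])
print(proxy_screen(data, protected_class))
```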
What documentation do insurance regulators want for AI programs?

Insurance regulators expect comprehensive AI documentation, including a written AI governance framework with board-approved policies, a complete inventory of all AI systems with risk classifications, documentation of model development including training data sources and validation results, bias testing methodology and outcomes, ongoing monitoring reports showing model performance and fairness metrics, incident logs for AI-related errors or complaints, third-party vendor assessments for outsourced AI services, and records of human oversight and intervention in AI-driven decisions. Regulators also want evidence of staff training programs and clear escalation procedures. Maintaining this documentation proactively demonstrates good faith compliance and streamlines market conduct examinations.
Does AI in claims processing create unfair claims practice liability?

Yes, AI in claims processing can create significant unfair claims practice liability. State unfair claims settlement practices acts require insurers to conduct reasonable investigations, provide timely claim decisions, and offer fair settlements. If an AI system systematically undervalues claims, denies legitimate claims based on flawed pattern recognition, or processes claims without adequate human review, the insurer faces regulatory action and private litigation. Courts and regulators hold insurers responsible for AI outcomes regardless of whether a human or algorithm made the decision. Insurers should implement human-in-the-loop review for claim denials and significant valuation decisions, and regularly audit AI claims outcomes for systematic bias.

