How to Write an AI Policy: Step-by-Step Guide for 2026

PolicyGuard Team
17 min read

Writing an AI policy requires 8 steps: audit current AI usage, define scope, classify tools by risk, draft the 12 core sections, conduct legal review, get leadership approval, deploy with acknowledgment tracking, and set up enforcement and annual review.

The process starts with understanding what AI tools employees already use, then moves through risk classification, drafting, legal and leadership sign-off, and finally deployment with tracking mechanisms that produce audit-ready evidence.

Every organization using AI tools needs a written AI policy. Without one, employees make their own decisions about which tools to use, what data to share, and how to apply AI outputs. That leads to inconsistent practices, data exposure, and audit failures. Writing an AI policy is how you replace informal habits with documented, enforceable standards.

This guide is for compliance officers, IT leaders, legal teams, and operations managers who need to create an AI policy from scratch or replace an informal one. By the end, you will have a complete, enforceable AI policy that covers all 12 sections auditors expect to see, with acknowledgment tracking and a review schedule. No prior policy-writing experience is required, but you should have access to your organization's current AI tool inventory and at least one stakeholder from legal.

If you already have a policy and need a template to compare against, see our AI acceptable use policy template. If you are looking for guidance on communicating the finished policy to employees, see our guide on AI policy for employees.

Before You Start

Before beginning the policy-writing process, make sure you have the following in place:

  • Stakeholder access: You will need input from legal counsel, IT/security, HR, and at least one executive sponsor. Identify these people before you start so you are not waiting mid-process.
  • AI tool inventory: You need a list of every AI tool employees currently use, both approved and unapproved. If you do not have this, Step 1 covers how to build it.
  • Regulatory context: Know which regulations apply to your organization (EU AI Act, HIPAA, SOC 2, state privacy laws). Your legal team should confirm this list.
  • Time estimate: The full process takes 12-24 business days manually, or 5-9 business days with PolicyGuard. The biggest variable is legal review turnaround time.

Step-by-Step: How to Write an AI Policy

Step 1: Conduct an AI Tool Usage Audit

The first step is building a complete picture of how AI tools are currently used across your organization. This means documenting every AI tool, who uses it, what data flows into it, and whether it was formally approved. You cannot write an effective policy without knowing what you are governing. Most organizations discover 3-5x more AI tools than they expected during this step because employees adopt tools independently without notifying IT.

Start by sending a survey to all department heads asking them to list every AI tool their teams use, including free tools and browser extensions. Cross-reference this with IT procurement records to identify tools purchased through formal channels. Then check browser extension reports, SSO logs, and network traffic logs for AI-related domains that employees access. Document each tool with its name, vendor, purpose, department, data types processed, and approval status.
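If your logs export cleanly, much of the cross-referencing can be scripted. The Python sketch below is a minimal example, assuming a CSV export with "user" and "domain" columns; the domain watchlist and file name are hypothetical placeholders, not an exhaustive list or a PolicyGuard feature.

```python
import csv
from collections import defaultdict

# Hypothetical watchlist of AI-related domains; extend with your own.
AI_DOMAINS = {
    "chat.openai.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "api.openai.com",
}

def find_ai_usage(log_path):
    """Map each AI-related domain in the log to the users who accessed it."""
    usage = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            # Match the domain itself or any subdomain of it.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                usage[domain].add(row["user"])
    return usage

if __name__ == "__main__":
    # "sso_export.csv" is a placeholder for your SSO or proxy log export.
    for domain, users in sorted(find_ai_usage("sso_export.csv").items()):
        print(f"{domain}: {len(users)} users - check approval status")
```

The output gives you a starting list of tools and users to reconcile against the survey responses and procurement records.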

The tools you need for this step are a spreadsheet or inventory management tool, access to IT procurement records, and SSO or network logs. With PolicyGuard, shadow AI detection automates the discovery process and provides a real-time inventory. This step is done when you have a complete spreadsheet listing every AI tool with its owner, purpose, data types, and approval status. The most common mistake is relying solely on surveys without checking network or SSO logs, which misses the shadow AI tools that pose the highest risk.

Step 2: Define the Policy Scope

Scope defines who the policy applies to, what activities it covers, and where its boundaries are. A policy without clear scope creates confusion about whether contractors are covered, whether personal AI use on company devices counts, or whether AI features embedded in existing software like Microsoft Copilot fall under the policy. Ambiguous scope is the number one reason policies fail during audits because auditors cannot verify compliance against vague requirements.

Write explicit statements covering four dimensions:

  • Personnel scope: all employees, contractors, temporary workers, and third-party vendors with access to company data.
  • Tool scope: all AI-powered tools, including standalone applications, browser extensions, embedded AI features in existing software, and API integrations.
  • Activity scope: generating content, analyzing data, making recommendations, automating decisions, and any other use of AI capabilities.
  • Device scope: company-owned devices, personal devices used for work, and cloud environments.

For each dimension, list specific inclusions and exclusions.
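For teams that want the scope to be machine-checkable, the same four dimensions can be captured as structured data. The sketch below is one illustrative way to do it; the keys and entries are assumptions drawn from the list above, not a required schema.

```python
# Illustrative machine-readable scope record; not a PolicyGuard format.
SCOPE = {
    "personnel": ["employees", "contractors", "temporary workers",
                  "third-party vendors with data access"],
    "tools": ["standalone applications", "browser extensions",
              "embedded AI features", "API integrations"],
    "activities": ["content generation", "data analysis",
                   "recommendations", "automated decisions"],
    "devices": ["company-owned devices", "personal devices used for work",
                "cloud environments"],
}

def covers(dimension, item):
    """True if the policy's scope explicitly lists this item."""
    return item in SCOPE.get(dimension, [])

print(covers("tools", "browser extensions"))  # True
```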

The tool you need for this step is a word processor or policy management platform. This step is done when you have a scope statement that a new employee could read and immediately understand whether a specific tool or activity falls under the policy. The most common mistake is writing scope too narrowly, covering only ChatGPT while ignoring the dozens of other AI tools employees use including AI features embedded in approved software like Notion AI, Grammarly, and GitHub Copilot.

Step 3: Classify AI Tools by Risk Level

Risk classification determines which tools require the most controls and which can be used with lighter governance. Without classification, organizations either apply heavy controls to every tool (which creates friction and drives employees to circumvent the policy) or apply light controls everywhere (which leaves high-risk tools unprotected). A tiered approach lets you match governance effort to actual risk, which is what auditors expect to see.

Create three or four risk tiers. A common model uses:

  • Tier 1 (Prohibited): tools that process regulated data without adequate safeguards or make autonomous decisions affecting individuals.
  • Tier 2 (Restricted): tools that process sensitive internal data and require approval and monitoring.
  • Tier 3 (Standard): tools approved for general business use with basic guidelines.
  • Tier 4 (Low Risk): tools with minimal data exposure that can be used freely.

For each tool in your inventory from Step 1, assign a tier based on two factors: the sensitivity of the data the tool processes and the impact of the decisions it influences. Document the rationale for each classification.
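The two-factor assignment can be expressed as a small lookup. The sketch below is a minimal illustration, assuming each factor is scored 1 (low) to 4 (regulated data or autonomous decisions) and that the worse factor drives the tier; the scales and cutoffs are assumptions you should calibrate to your own data classification scheme.

```python
def assign_tier(data_sensitivity, decision_impact):
    """Assign a risk tier from data sensitivity and decision impact,
    each scored 1 (low) to 4 (regulated data / autonomous decisions).
    The scoring scale and cutoffs are illustrative assumptions."""
    score = max(data_sensitivity, decision_impact)  # worst factor wins
    return {
        4: "Tier 1 (Prohibited)",
        3: "Tier 2 (Restricted)",
        2: "Tier 3 (Standard)",
        1: "Tier 4 (Low Risk)",
    }[score]

# Example: a tool that sees sensitive internal data (3) but only drafts
# text for human review (1) lands in Tier 2 (Restricted).
print(assign_tier(3, 1))
```

Taking the worse of the two factors is a deliberately conservative choice; some organizations average the scores instead, which trades safety for less friction.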

You will need your completed AI tool inventory from Step 1, input from your IT security team on data sensitivity, and input from business owners on decision impact. This step is done when every tool in your inventory has an assigned risk tier with documented rationale. The most common mistake is classifying tools based only on the vendor's marketing materials instead of analyzing actual data flows and decision impacts within your specific organization.

Step 4: Draft the 12 Core Policy Sections

The 12 sections represent the complete set of topics that auditors, regulators, and employees expect an AI policy to address. Skipping a section creates a gap that auditors will flag and that employees will interpret as permission to act without guidance. Each section should be specific enough that an employee can read it and know exactly what they are allowed and not allowed to do, with no room for interpretation.

The 12 sections are:

  1. Purpose and Objectives: states why the policy exists and what it aims to achieve.
  2. Scope: the boundaries you defined in Step 2.
  3. Definitions: clarifies terms like AI, machine learning, automated decision-making, and shadow AI.
  4. Roles and Responsibilities: assigns ownership for policy enforcement, tool approval, incident response, and annual review.
  5. Approved and Prohibited Tools: lists specific tools by risk tier.
  6. Data Handling Requirements: specifies what data types can and cannot be used with AI tools.
  7. Output Review and Human Oversight: requires human review before AI outputs are used in decisions.
  8. Intellectual Property and Confidentiality: addresses ownership of AI-generated content and confidentiality obligations.
  9. Vendor Assessment Requirements: details how new AI tools are evaluated before approval.
  10. Training Requirements: specifies what training employees must complete and how often.
  11. Incident Reporting: explains how to report AI-related incidents.
  12. Compliance Monitoring and Enforcement: details how violations are detected, investigated, and addressed.
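Before the draft goes to legal, a quick script can confirm that no section was skipped. The sketch below assumes the draft is a plain text file whose headings match the names above; the matching is deliberately loose and the file name is a placeholder.

```python
# Required headings taken from the list above.
REQUIRED_SECTIONS = [
    "Purpose and Objectives", "Scope", "Definitions",
    "Roles and Responsibilities", "Approved and Prohibited Tools",
    "Data Handling Requirements", "Output Review and Human Oversight",
    "Intellectual Property and Confidentiality",
    "Vendor Assessment Requirements", "Training Requirements",
    "Incident Reporting", "Compliance Monitoring and Enforcement",
]

def missing_sections(draft_path):
    """Return required section headings not found anywhere in the draft."""
    with open(draft_path, encoding="utf-8") as f:
        text = f.read().lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in text]

if __name__ == "__main__":
    gaps = missing_sections("ai_policy_draft.txt")  # placeholder path
    print("All 12 sections present." if not gaps
          else "Missing: " + ", ".join(gaps))
```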

You will need the outputs from Steps 1-3, a word processor or policy management platform, and input from legal, HR, IT, and executive stakeholders on sections relevant to their domains. This step is done when you have a complete draft with all 12 sections written in clear, specific language that a non-technical employee can understand. The most common mistake is writing sections in vague, aspirational language like "employees should use AI responsibly" instead of specific, enforceable language like "employees must not input customer personal data into any Tier 2 or higher AI tool without written approval from their department head."

Step 5: Conduct Legal Review

Legal review ensures your policy complies with applicable regulations, does not create unintended contractual obligations, and is enforceable under employment law. Skipping legal review is the fastest way to create a policy that either violates regulations (exposing the organization to fines) or contains unenforceable provisions (making the policy useless when you need to act on a violation). Legal review also catches language that could conflict with existing employment agreements, union contracts, or data processing agreements.

Send the complete draft to your legal counsel with a cover memo that includes: the list of regulations you believe apply (EU AI Act, state privacy laws, HIPAA, etc.), any industry-specific requirements, the list of AI tools covered, and specific questions about enforceability in your jurisdiction. Ask legal to review for regulatory compliance, enforceability of disciplinary provisions, consistency with existing employment agreements, data processing requirements under privacy laws, and intellectual property implications of AI-generated outputs. Schedule a 60-minute review meeting to walk through their feedback rather than relying solely on written comments, because policy language nuances are easier to resolve in conversation.

You will need the complete policy draft from Step 4, access to legal counsel (internal or external), and a list of applicable regulations. This step is done when legal counsel has reviewed every section, provided written approval or requested specific changes, and you have incorporated all required changes. The most common mistake is sending legal a draft without context, which leads to generic feedback and multiple review rounds. Providing the cover memo with specific regulatory context and questions cuts the review cycle from weeks to days.

Step 6: Obtain Leadership Approval

Leadership approval gives the policy organizational authority and signals to employees that compliance is mandatory, not optional. Without executive sign-off, employees treat the policy as a suggestion from the compliance team rather than an organizational requirement. Leadership approval also establishes budget authority for enforcement tools, training programs, and ongoing governance activities. Auditors specifically ask for evidence of leadership approval because it demonstrates tone at the top.

Prepare a one-page executive summary that covers: the business risk of operating without an AI policy (data exposure incidents, regulatory fines, audit failures), the scope of the policy, the key restrictions it introduces, the enforcement approach, and the resources required for implementation. Present this to your executive sponsor first to get their buy-in, then schedule a formal approval with the required leadership group (typically C-suite or VP-level depending on your organization). Record the approval in writing with names, titles, dates, and any conditions attached to the approval. Store this approval record as part of your audit evidence.
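One lightweight way to keep approval records consistent is a structured record. The sketch below is illustrative; the fields mirror the paragraph above, and the name, title, date, and condition shown are hypothetical examples.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ApprovalRecord:
    """Audit-evidence record of a leadership approval; fields mirror the
    guidance above (names, titles, dates, attached conditions)."""
    approver_name: str
    approver_title: str
    approved_on: date
    conditions: list = field(default_factory=list)

# Hypothetical example entry for the audit trail.
record = ApprovalRecord(
    approver_name="Jane Doe",
    approver_title="Chief Operating Officer",
    approved_on=date(2026, 1, 15),
    conditions=["Quarterly compliance reporting to the executive team"],
)
print(record)
```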

You will need the legally reviewed policy draft from Step 5, a one-page executive summary, and access to your executive sponsor and approval authority. This step is done when you have a signed or documented approval from the required leadership level, with the approval stored in your document management system. The most common mistake is presenting the full policy document to executives instead of a concise summary, which leads to delayed approvals as executives push the review to the next quarter.

Step 7: Deploy with Acknowledgment Tracking

Deployment means distributing the policy to every person in scope and collecting signed acknowledgments that they received, read, and understood it. Acknowledgment tracking is not optional. It is a core audit requirement. Without timestamped acknowledgment records, you cannot prove that employees were informed of the policy, which means you cannot hold anyone accountable for violations and auditors will flag the gap. Deployment also includes making the policy accessible in a permanent, easy-to-find location.

Choose a deployment method that captures acknowledgments with timestamps and employee identification. Options include policy management platforms with built-in acknowledgment workflows, e-signature platforms like DocuSign, or dedicated compliance tools like PolicyGuard that combine distribution, acknowledgment, and evidence collection. Send the policy to all employees in scope with a clear deadline for acknowledgment (typically 5-10 business days). Include a brief cover message explaining what the policy is, why it matters, and what happens if the acknowledgment deadline is missed. Publish the policy in your intranet, knowledge base, or policy portal so employees can reference it later. Set up automated reminders for employees who have not acknowledged by the deadline. Track completion rates daily and escalate non-compliance to managers after the deadline passes.
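If your tracking tool can export acknowledgment status, the completion rate and escalation list fall out of a few lines. The sketch below assumes a CSV export with "employee", "manager", and "acknowledged_on" columns, where "acknowledged_on" is blank until the employee signs; the column names, file name, and deadline are placeholders.

```python
import csv
from datetime import date

DEADLINE = date(2026, 3, 16)  # hypothetical acknowledgment deadline

def acknowledgment_report(export_path):
    """Print the completion rate and, past the deadline, an escalation list."""
    with open(export_path, newline="") as f:
        rows = list(csv.DictReader(f))
    done = [r for r in rows if r["acknowledged_on"]]
    pending = [r for r in rows if not r["acknowledged_on"]]
    rate = 100 * len(done) / len(rows) if rows else 0.0
    print(f"Acknowledged: {len(done)}/{len(rows)} ({rate:.0f}%)")
    if date.today() > DEADLINE:
        for r in pending:  # escalate unacknowledged employees to managers
            print(f"Escalate: {r['employee']} -> manager {r['manager']}")

acknowledgment_report("acknowledgments.csv")  # placeholder export path
```

Run daily against a fresh export and you have both the completion metric and the manager escalation list the deadline process calls for.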

You will need the approved policy from Step 6, a distribution and acknowledgment tracking tool, an employee directory with email addresses, and manager contact information for escalation. This step is done when 100% of in-scope employees have signed acknowledgments with timestamps stored in a system that can produce an export for auditors. The most common mistake is distributing the policy via email without acknowledgment tracking, which means you have no proof of receipt and cannot demonstrate compliance during an audit.

Step 8: Set Up Enforcement and Schedule Annual Review

Enforcement turns your policy from a document into an operational control. Without enforcement mechanisms, the policy exists only on paper and employees learn quickly that there are no consequences for non-compliance. Annual review ensures the policy stays current as AI tools evolve, new regulations take effect, and organizational AI usage patterns change. Auditors specifically check for evidence of active enforcement and regular review because these demonstrate that governance is ongoing, not a one-time exercise.

Set up enforcement at three levels. First, technical enforcement: configure tools to block prohibited AI applications, require approval workflows for restricted tools, and monitor usage patterns for policy violations. Second, procedural enforcement: define the investigation process for reported violations, the escalation path, and the disciplinary actions by severity level. Third, reporting enforcement: establish monthly or quarterly reporting on policy compliance metrics including acknowledgment rates, training completion, violation counts, and resolution times. For the annual review, set a calendar reminder for 12 months after approval. Define the review process: who participates, what data they review, what triggers an out-of-cycle review (such as a new regulation or a major AI incident), and how changes are approved and re-deployed. Document the review schedule in the policy itself so auditors can verify it is being followed.
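The reporting layer can start as simply as a periodic snapshot with threshold checks. The sketch below is illustrative; the metrics mirror the ones listed above, and the thresholds are assumptions to tune to your own targets.

```python
from dataclasses import dataclass

@dataclass
class ComplianceSnapshot:
    """One reporting period of the compliance metrics listed above."""
    period: str
    acknowledgment_rate: float   # % of in-scope employees acknowledged
    training_completion: float   # % of required training completed
    open_violations: int
    median_resolution_days: float

    def flags(self):
        """Return issues to escalate; thresholds are illustrative."""
        out = []
        if self.acknowledgment_rate < 100:
            out.append("acknowledgment gap - escalate to managers")
        if self.training_completion < 95:
            out.append("training completion below target")
        if self.open_violations > 0:
            out.append(f"{self.open_violations} unresolved violations")
        return out

# Hypothetical quarterly figures.
snap = ComplianceSnapshot("2026-Q1", 98.5, 92.0, 2, 6.5)
print(snap.flags())
```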

You will need enforcement tools (monitoring software, approval workflow systems, or PolicyGuard for integrated enforcement), HR input on disciplinary processes, and calendar access for scheduling the annual review. This step is done when technical enforcement is active, the investigation and disciplinary process is documented, compliance reporting is running, and the annual review is scheduled with assigned owners. The most common mistake is treating enforcement as a future task rather than a deployment requirement, which creates a gap between policy publication and actual enforcement that auditors will identify and flag.

Common Mistakes When Writing an AI Policy

  • Writing scope too narrowly. Many organizations write a policy that only covers ChatGPT or one specific tool. Employees then use dozens of other AI tools without guidance because those tools are technically outside the policy scope. The cost is uncontrolled data exposure and audit findings for incomplete governance. Avoid this by writing scope to cover all AI-powered tools, including embedded AI features in existing software.
  • Using vague language instead of specific requirements. Phrases like "use AI responsibly" and "exercise good judgment" are not enforceable and give auditors nothing to verify. The cost is a policy that looks good on paper but provides zero operational control. Avoid this by writing specific, testable requirements for every section.
  • Skipping acknowledgment tracking. Distributing the policy by email or posting it on the intranet without collecting signed acknowledgments means you cannot prove employees received it. The cost is an audit finding for every employee without a documented acknowledgment and an inability to enforce violations. Avoid this by using a tool that captures timestamped acknowledgments.
  • Not involving legal early enough. Drafting all 12 sections without legal input leads to multiple revision cycles because legal flags fundamental issues with approach, not just wording. The cost is weeks of additional delay and potential regulatory non-compliance. Avoid this by briefing legal on your approach before drafting and getting their input on the regulatory requirements that should shape the content.
  • Treating the policy as a one-time project. Publishing the policy and moving on without enforcement or review means the policy becomes outdated within months as employees adopt new AI tools and regulations change. The cost is a false sense of security and inevitable audit failures. Avoid this by building enforcement and annual review into the deployment plan from day one.

Write Your AI Policy Faster

PolicyGuard provides pre-built AI policy templates covering all 12 sections, automated acknowledgment tracking, and enforcement tools that turn your policy into an operational control.

Start free trial


How Long This Takes

Phase                 | Manual      | With PolicyGuard
AI Tool Usage Audit   | 3-5 days    | 1-2 days
Drafting              | 2-4 days    | 2-4 hours
Legal Review          | 3-7 days    | 3-7 days
Leadership Approval   | 1-3 days    | 1-3 days
Deployment            | 3-5 days    | 1-2 hours
Total                 | 12-24 days  | 5-9 days

Frequently Asked Questions

How long should an AI policy be?

A complete AI policy covering all 12 sections typically runs 8-15 pages. Shorter policies usually have gaps that auditors will flag. Longer policies often contain too much procedural detail that belongs in separate operational documents. Aim for comprehensive coverage with concise language. Each section should be long enough to be specific and enforceable but short enough that employees will actually read it.

Who should own the AI policy?

The AI policy should have a single owner who is accountable for keeping it current, typically the Chief Compliance Officer, Chief Information Security Officer, or VP of Legal. The owner does not write every section alone but is responsible for coordinating input from legal, IT, HR, and business leaders, and for ensuring the annual review happens on schedule. Without a single owner, policies stall in committee and no one is accountable for gaps.

Do we need a separate AI policy or can we add AI to our existing acceptable use policy?

A separate AI policy is strongly recommended. Adding a few AI paragraphs to an existing acceptable use policy does not provide the depth auditors expect on topics like risk classification, vendor assessment, training requirements, and enforcement mechanisms. A standalone AI policy also makes it easier to track acknowledgments specifically for AI and to update the policy as AI regulations evolve without re-deploying the entire acceptable use policy.

How often should an AI policy be reviewed?

At minimum annually, with triggers for out-of-cycle reviews. Those triggers should include: a new regulation taking effect (such as a new EU AI Act implementation deadline), a significant AI incident at your organization or a peer organization, a major change in AI tool adoption patterns, or a merger or acquisition that changes the scope of AI usage. Most organizations review quarterly in the first year and then move to annual reviews once the policy stabilizes.

What happens if employees do not acknowledge the AI policy?

Employees who do not acknowledge the policy create both a compliance gap and an enforcement gap. From a compliance perspective, auditors will flag every employee without a signed acknowledgment as a finding. From an enforcement perspective, disciplinary action for policy violations is harder to defend when the employee can claim they were never informed of the policy. Escalate non-acknowledgment to the employee's manager after the deadline, then to HR if it continues. Some organizations include policy acknowledgment as a condition of continued system access.

Write Your AI Policy Faster

PolicyGuard provides pre-built AI policy templates, automated acknowledgment tracking, and enforcement tools so you can go from first draft to full deployment in days instead of weeks.

Start free trial