Tracking AI tool usage requires three detection methods in combination: browser extension monitoring for web-based tools, OAuth monitoring for AI apps connected to corporate accounts, and DNS monitoring for complete network-level visibility.
No single detection method covers all AI tool usage patterns. Browser extensions catch web-based AI interactions but miss desktop applications. OAuth monitoring catches AI integrations with corporate accounts but misses standalone tools. DNS monitoring catches all network requests but cannot distinguish between casual visits and active usage. Combining all three creates layered visibility that is difficult for any AI tool to evade.
Your organization published an AI policy six months ago. You have a list of approved tools and training completion rates above ninety percent. By every documented measure, your AI governance program looks healthy. But you have a nagging question: do you actually know which AI tools employees use every day? Most organizations discover, when they implement real monitoring, that their approved tool list covers less than half of actual AI tool usage. The rest is shadow AI: tools adopted by individuals and teams without approval, assessment, or oversight. This guide walks through seven steps to build comprehensive AI tool tracking that gives you real visibility into what is happening across your organization. For context on why shadow AI tracking matters, see our analysis of shadow AI risks.
Before You Start
Before deploying monitoring tools, make sure three foundations are in place.

First, you need executive sponsorship for AI tool monitoring. Monitoring employee tool usage touches privacy, legal, and employee relations concerns that require leadership approval. Get written authorization from your CISO, CIO, or equivalent, and review the monitoring plan with legal counsel to ensure compliance with employment law and privacy regulations in every jurisdiction where you operate.

Second, you need clear communication to employees about what will be monitored and why. Surprise surveillance destroys trust and can create legal liability. Update your acceptable use policy to include AI tool monitoring, communicate the monitoring program through the same channels you used for the AI policy, and provide employees with a clear explanation of what data is collected, how it is used, and who has access to it.

Third, you need a defined list of approved and prohibited AI tools that monitoring will be measured against. Without this list, monitoring data has no governance context: you cannot flag a tool as unauthorized if you have not defined what is authorized.
Step-by-Step Guide
Step 1: Deploy Browser Extension Monitoring
Action: Deploy a browser extension to all managed endpoints that detects when employees access AI tool websites. The extension should monitor URL patterns associated with known AI tools, including ChatGPT, Claude, Gemini, Midjourney, Jasper, Copy.ai, GitHub Copilot web interface, Perplexity, and any other tools relevant to your industry. Configure the extension to log the tool name, user identity, timestamp, and session duration without capturing the content of AI interactions. Deploy through your endpoint management platform such as Intune, Jamf, or Group Policy to ensure coverage across all managed devices.
Why this matters: Browser-based AI tools represent the largest category of AI usage in most organizations because they require no installation and no IT approval. An employee can start using ChatGPT or Claude in thirty seconds by opening a browser tab. Without browser-level monitoring, this usage is completely invisible to traditional IT monitoring tools. Browser extension monitoring provides the earliest possible detection of AI tool usage because it operates at the point of interaction rather than at the network or account level. It also provides user-level attribution that network monitoring cannot deliver, so you know exactly which employee accessed which tool and when.
Tools: Browser extension management through MDM platforms like Microsoft Intune or Jamf for deployment, a managed browser extension that identifies AI-related URLs, and a centralized logging platform that aggregates extension data across the organization. PolicyGuard provides a purpose-built browser extension that identifies over two hundred AI tools and reports usage to a centralized dashboard with user-level attribution. For more on detecting unauthorized tools specifically, see our guide on detecting unauthorized AI tool usage.
Done when: The browser extension is deployed to all managed endpoints, initial data collection confirms that usage events are being captured correctly, and at least one week of baseline data has been collected before any enforcement actions begin.
Common mistake: Deploying the extension only to Chrome and missing Safari, Firefox, or Edge usage. Deploy across all browsers that your organization supports, or restrict browser usage to a single managed browser where the extension is active.
Step 2: Configure OAuth Integration Monitoring
Action: Configure monitoring for OAuth grants and API integrations that connect AI tools to your corporate accounts. Review your Google Workspace admin console, Microsoft Entra ID, and Slack admin panel for third-party AI applications that employees have authorized. Set up alerts for new OAuth grants to known AI services. Focus on integrations where AI tools receive read or write access to corporate email, documents, calendars, or messaging platforms because these integrations create the highest data exposure risk.
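Once you export grant records from an admin console, triage reduces to checking each grant against a known-AI-app list and a set of high-risk scopes. The sketch below assumes illustrative field names, app names, and Google scope strings; adapt them to whatever your identity platform actually exports.

```python
# Illustrative triage of OAuth grant records exported from an admin console.
# Field names, app names, and the exact export format are assumptions.
RISKY_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/calendar.readonly",
}
KNOWN_AI_APPS = {"ChatGPT", "Otter.ai", "Fireflies.ai"}

def flag_risky_grants(grants: list[dict]) -> list[dict]:
    """Return AI-app grants that touch email, document, or calendar data."""
    flagged = []
    for grant in grants:
        if grant["app"] in KNOWN_AI_APPS and RISKY_SCOPES & set(grant["scopes"]):
            flagged.append(grant)
    return flagged

grants = [
    {"app": "Fireflies.ai", "user": "jdoe@example.com",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
    {"app": "Expensify", "user": "asmith@example.com",
     "scopes": ["https://www.googleapis.com/auth/drive.file"]},
]
risky = flag_risky_grants(grants)
```

In this hypothetical data, only the Fireflies.ai grant is flagged: it is a known AI app holding a risky calendar scope, exactly the continuous-access pattern the step above prioritizes.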
Why this matters: OAuth integrations represent a different and often more dangerous category of AI tool usage than browser-based access. When an employee grants an AI tool OAuth access to their corporate Google account, that tool can read their email, access their documents, and analyze their calendar data on an ongoing basis without the employee actively using it each day. A single OAuth grant can expose more data than months of manual browser-based AI usage because the integration operates continuously and has broad access to the connected account. OAuth monitoring catches integrations that browser monitoring misses because the AI tool may operate entirely through API calls without generating browser URL events.
Tools: Google Workspace Admin Console for reviewing and alerting on third-party app access, Microsoft Entra ID for monitoring enterprise application consents, Slack admin dashboard for reviewing AI bot and integration installations, and CASB platforms that aggregate OAuth grants across multiple SaaS platforms. PolicyGuard monitors OAuth grants across Google Workspace and Microsoft 365, alerting when employees authorize AI tools to access corporate data.
Done when: OAuth monitoring is configured for all corporate identity platforms, existing AI-related OAuth grants have been inventoried and reviewed, alerts are active for new AI tool authorizations, and a process exists to revoke unauthorized grants within twenty-four hours of detection.
Common mistake: Monitoring only the primary identity platform and missing integrations through secondary platforms. If your organization uses both Google Workspace and Microsoft 365, or if employees use personal accounts for some Slack workspaces, you need monitoring across all platforms where OAuth grants can expose corporate data.
Step 3: Set Up DNS Monitoring
Action: Configure DNS monitoring to detect network requests to known AI service domains. Build a domain list that includes the primary domains and API endpoints for all major AI services such as openai.com, api.openai.com, anthropic.com, api.anthropic.com, gemini.google.com, midjourney.com, and others. Configure your DNS resolver, firewall, or secure web gateway to log requests to these domains and generate alerts for new AI domains that appear in your network traffic. DNS monitoring should cover all network segments including corporate WiFi, VPN connections, and any other network paths employees use to access the internet from managed devices.
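Matching DNS queries against an AI domain list needs to catch subdomains as well as exact matches, since API endpoints like api.openai.com must resolve to the same entry as openai.com. A minimal sketch, assuming a simple space-separated log line of timestamp, client IP, and queried domain:

```python
# Sketch of scanning DNS resolver logs for queries to AI-service domains.
# The three-field log line format is an assumption for illustration.
AI_DOMAINS = {"openai.com", "anthropic.com", "midjourney.com"}

def is_ai_domain(domain: str) -> bool:
    """True if the queried name is a listed AI domain or a subdomain of one."""
    parts = domain.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in AI_DOMAINS for i in range(len(parts)))

def scan_dns_log(lines: list[str]) -> list[tuple[str, str]]:
    """Return (client_ip, queried_domain) pairs for AI-domain queries."""
    hits = []
    for line in lines:
        _timestamp, client_ip, domain = line.split()
        if is_ai_domain(domain):
            hits.append((client_ip, domain))
    return hits

log = [
    "2025-06-01T09:14:02Z 10.0.4.17 api.openai.com.",
    "2025-06-01T09:14:05Z 10.0.4.22 example.com.",
]
hits = scan_dns_log(log)
```

The suffix walk in `is_ai_domain` is the important part: it is what lets one list entry cover the primary domain and every API or regional subdomain under it.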
Why this matters: DNS monitoring provides the broadest visibility of any single detection method because every internet-connected tool must resolve a domain name to function. Desktop AI applications, API integrations, mobile apps, and browser tools all generate DNS requests. DNS monitoring catches usage that browser extensions miss, such as AI tools running as desktop applications, command-line tools calling AI APIs, and mobile apps on corporate networks. It also catches attempts to access AI tools through alternative domains, proxy services, or API endpoints that may not match browser extension URL patterns. DNS monitoring is your safety net that catches what the other two detection methods miss.
Tools: Enterprise DNS resolver with logging capabilities such as Cisco Umbrella, Cloudflare Gateway, or Infoblox. Firewall or secure web gateway with domain-based alerting. SIEM platform for aggregating DNS logs with browser and OAuth monitoring data. PolicyGuard integrates with DNS monitoring platforms to correlate network-level detection with browser and OAuth data in a unified dashboard.
Done when: DNS monitoring is active across all corporate network segments, the AI domain list includes all known AI services and is scheduled for monthly updates, alerts are configured for new AI domains not on the approved or prohibited lists, and at least one week of baseline data has been collected.
Common mistake: Failing to update the AI domain list as new AI tools launch. The AI tool landscape changes rapidly, with new tools launching weekly. Schedule monthly reviews of your AI domain list and subscribe to threat intelligence feeds that track new AI service domains.
Step 4: Build Approved and Prohibited Tool Lists
Action: Create two definitive lists: approved AI tools that employees are permitted to use, and prohibited AI tools that employees must not use. For each approved tool, document the approved use cases, data classification limits, required configurations such as disabling training on corporate data, and the user groups authorized to access it. For each prohibited tool, document the reason for prohibition such as data residency concerns, lack of enterprise security features, or failure to pass vendor assessment. Publish both lists in a location every employee can access and update them within forty-eight hours whenever a new tool is assessed.
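With both lists in place, classification of a detection event is a trivial lookup, which is what makes automated triage possible. A minimal sketch with illustrative tool names (the prohibited entry is hypothetical):

```python
# Sketch of classifying detection events against the approved/prohibited lists.
# Tool names are illustrative; "Unvetted-AI-Writer" is a hypothetical example.
APPROVED = {"Claude", "GitHub Copilot"}
PROHIBITED = {"Unvetted-AI-Writer"}

def classify(tool: str) -> str:
    """Map a detected tool to its governance status."""
    if tool in APPROVED:
        return "compliant"
    if tool in PROHIBITED:
        return "violation"
    return "needs-review"
```

Every event falls into one of three buckets: "compliant" needs no action, "violation" triggers the revocation process, and "needs-review" feeds the unknown-tool assessment queue described in Step 5.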
Why this matters: Monitoring data without classification context generates noise rather than intelligence. When your monitoring system detects an employee accessing an AI tool, the first question is whether that tool is approved, prohibited, or unknown. Without definitive lists, every detection event requires manual investigation to determine whether it represents a policy violation. With lists, your monitoring system can automatically categorize events as compliant, non-compliant, or requiring review. This automation transforms monitoring from a manual review burden into an efficient governance operation. Clear lists also give employees unambiguous guidance, eliminating the common excuse of not knowing whether a specific tool was allowed.
Tools: A governance platform or structured document system for maintaining the lists, integration with your monitoring tools so that lists are automatically applied to detection events, and a change management process for adding or removing tools from each list. PolicyGuard maintains dynamic approved and prohibited tool lists that automatically classify monitoring events and generate alerts for prohibited tool usage.
Done when: Both lists are published and accessible to all employees, monitoring systems are configured to classify events against the lists, a process exists for requesting tool assessment and addition to either list, and the lists have been reviewed by IT security and legal.
Common mistake: Creating lists that are too narrow. If your approved list contains five tools but your organization uses thirty AI tools, twenty-five tools will generate unclassified alerts that overwhelm your review capacity. Be comprehensive in your assessment to minimize the unknown category.
Step 5: Configure New Tool Detection Alerts
Action: Configure your monitoring system to generate alerts when an AI tool that is not on either the approved or prohibited list is detected in your environment. These unknown tool alerts should be routed to the person or team responsible for AI tool assessment with all available context: which employee accessed the tool, what detection method identified it, when it was first seen, and how many employees have used it. Set a service level agreement that unknown tools are assessed and classified within five business days of first detection.
Why this matters: The AI tool landscape evolves faster than any static list can track. New AI tools launch weekly, and employees discover and adopt them before governance teams are aware they exist. New tool detection alerts ensure that your approved and prohibited lists stay current by surfacing tools you have not yet assessed. Without this alerting, your lists gradually become outdated and an increasing percentage of AI usage falls into the unclassified gap where you have neither approved nor prohibited a tool. The five-day assessment SLA ensures that new tools are governed promptly rather than accumulating in a backlog that creates growing risk exposure.
Tools: Monitoring platform alert configuration for unknown tool detection, ticketing system integration for routing alerts to the assessment team with SLA tracking, and a lightweight assessment template that can be completed within the five-day SLA. PolicyGuard automatically detects new AI tools in your environment and creates assessment tickets with pre-populated information to accelerate the classification decision.
Done when: Unknown tool alerts are active across all three monitoring methods, alerts route to the assessment team with full context, the five-day SLA is documented and tracked, and the first round of unknown tool alerts has been processed to validate the workflow.
Common mistake: Setting alert thresholds too low and generating alert fatigue. If every single DNS request to a new AI domain generates an alert, the assessment team will be overwhelmed. Configure alerts to trigger after a threshold, such as three or more employees accessing the same unknown tool or a single employee accessing it more than five times, to focus attention on tools with meaningful adoption rather than one-time visits.
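The threshold logic described above can be sketched as a small stateful counter: suppress alerts until either enough distinct users touch an unknown tool or one user touches it repeatedly. The default numbers match the example thresholds in the common mistake; treat them as starting points to tune.

```python
# Sketch of alert suppression for unknown-tool detections: fire only after
# three or more distinct users, or more than five hits from a single user.
from collections import defaultdict

class UnknownToolAlerter:
    def __init__(self, min_users: int = 3, max_single_user_hits: int = 5):
        self.min_users = min_users
        self.max_single_user_hits = max_single_user_hits
        # tool -> {user -> access count}
        self.hits: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))

    def record(self, tool: str, user: str) -> bool:
        """Record one access; return True once the tool warrants an alert."""
        self.hits[tool][user] += 1
        users = self.hits[tool]
        return (len(users) >= self.min_users
                or any(n > self.max_single_user_hits for n in users.values()))
```

One-time visits never fire, so the assessment team only sees tools with meaningful adoption.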
Step 6: Establish Weekly Usage Review
Action: Schedule a weekly thirty-minute review of AI tool usage data. The review should cover new tools detected since the last review, prohibited tool access events and their resolution status, usage trends across approved tools including any unusual spikes or declines, department-level adoption patterns, and progress on any outstanding tool assessments. Assign a consistent review owner and create a standard report template that can be populated from monitoring data in under ten minutes. Document review outcomes and action items.
Why this matters: Monitoring data that is collected but not reviewed provides zero governance value. Weekly reviews transform raw monitoring data into actionable intelligence by identifying patterns that automated alerts may miss. A gradual increase in usage of an unknown tool across multiple departments signals organic adoption that requires faster assessment. A sudden decline in approved tool usage may indicate that employees are switching to unapproved alternatives. Department-level patterns reveal which teams need additional training or which teams have tool gaps that governance should address. The weekly cadence balances timeliness with sustainability because daily reviews are too burdensome and monthly reviews allow too much ungoverned usage to accumulate.
Tools: Monitoring platform reports and dashboards for data aggregation, a standard review template in spreadsheet or document format, a task tracking system for documenting action items, and calendar scheduling for consistent review timing. PolicyGuard provides a weekly summary report that highlights new tools, violations, trends, and outstanding assessments in a format designed for thirty-minute executive review.
Done when: The weekly review is scheduled on a recurring calendar, the review template is created and populated with the first week of data, a review owner is assigned and has conducted the first review, and action items from the first review have been documented and assigned.
Common mistake: Scheduling the review but not protecting the time. If the weekly review is regularly skipped or postponed due to competing priorities, monitoring data accumulates without governance action. Treat the review as a non-negotiable governance obligation and escalate to leadership if it is consistently deprioritized.
Step 7: Generate Monthly Usage Reports
Action: Produce a monthly AI tool usage report for governance stakeholders including the CISO, compliance lead, and any AI governance committee members. The report should contain a summary dashboard with total AI tool usage across the organization, a breakdown by approved versus prohibited versus unknown tools, department-level usage rankings, new tools detected and their classification status, violation counts and resolution rates, trend comparisons against previous months, and recommendations for tool additions, removals, or policy changes based on the data. Distribute the report within five business days of month-end and archive it for audit purposes.
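The headline numbers in that report are straightforward aggregations over the month's classified events. A sketch, assuming each event carries a governance status and a department (field names are illustrative):

```python
# Sketch aggregating a month of classified detection events into the
# summary figures for the monthly report. Event fields are assumptions.
from collections import Counter

def summarize(events: list[dict]) -> dict:
    """Compute totals, status/department breakdowns, and shadow-AI percentage."""
    by_status = Counter(e["status"] for e in events)
    by_dept = Counter(e["department"] for e in events)
    total = len(events)
    # Shadow AI here means any usage outside the approved list.
    shadow_pct = round(100 * (total - by_status["approved"]) / total, 1) if total else 0.0
    return {"total": total, "by_status": dict(by_status),
            "by_dept": dict(by_dept), "shadow_pct": shadow_pct}

events = [
    {"status": "approved", "department": "Engineering"},
    {"status": "unknown", "department": "Sales"},
    {"status": "prohibited", "department": "Engineering"},
]
summary = summarize(events)
```

Tracking `shadow_pct` month over month is what lets the report tell the trend story, such as shadow usage falling from thirty percent to ten percent as approvals catch up with real demand.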
Why this matters: Monthly reports provide the governance narrative that weekly reviews cannot deliver. While weekly reviews focus on immediate action items, monthly reports reveal trends, measure program effectiveness, and inform strategic decisions. A monthly report showing that shadow AI decreased from thirty percent of usage to ten percent over three months demonstrates that your governance program is working. A report showing that one department consistently has the highest prohibited tool usage signals a need for targeted intervention. These reports also serve a critical audit function by creating a timestamped record of governance activity that demonstrates to auditors and regulators that AI tool usage is actively monitored and governed over time.
Tools: Monitoring platform reporting and export capabilities, data visualization tools for trend charts and department breakdowns, document templates for consistent report formatting, and a distribution list for governance stakeholders. PolicyGuard generates monthly AI usage reports with department breakdowns, trend analysis, and violation summaries that can be exported in PDF or CSV format for governance review and audit archival.
Done when: The first monthly report has been generated, reviewed by the governance team, and distributed to stakeholders. The report template is standardized, the distribution list is confirmed, and the report is archived in the audit evidence repository.
Common mistake: Creating reports that are too long or too detailed for the audience. Governance stakeholders need a one-page executive summary with the option to drill into detail. Lead with the three most important findings and recommendations, then provide supporting data in an appendix. Reports that require thirty minutes to read will not be read.
Common Mistakes
- Relying on a single detection method. Browser monitoring alone misses desktop apps and API integrations. OAuth monitoring alone misses standalone tools. DNS monitoring alone cannot attribute usage to specific users. All three methods together provide comprehensive coverage.
- Deploying monitoring without employee communication. Surprise surveillance destroys trust and may violate employment or privacy laws in many jurisdictions. Always communicate monitoring plans before deployment.
- Collecting data without reviewing it. Monitoring tools that collect data nobody reviews provide zero governance value while creating privacy liability. If you collect it, review it.
- Static tool lists that go stale. The AI landscape changes weekly. Tool lists that are not updated monthly become increasingly inaccurate, creating false confidence in compliance metrics.
- Alert fatigue from poorly configured thresholds. Too many low-priority alerts cause the governance team to ignore or deprioritize monitoring entirely. Tune thresholds to focus on actionable events.
Get Complete Visibility Into AI Tool Usage
PolicyGuard combines browser monitoring, OAuth detection, and DNS analysis into a single platform. See every AI tool in use across your organization with user-level attribution and automated compliance classification.
Start free trial
PolicyGuard helps companies like yours get AI governance documentation audit-ready in 48 hours or less.
How Long Does Each Step Take?
| Step | Setup Time | Ongoing Effort |
|---|---|---|
| Deploy browser extension monitoring | 2-4 hours | Minimal maintenance |
| Configure OAuth integration monitoring | 1-2 hours | Monthly review |
| Set up DNS monitoring | 2-4 hours | Minimal maintenance |
| Build approved/prohibited tool lists | 4-8 hours | Monthly updates |
| Configure new tool detection alerts | 1-2 hours | Quarterly threshold tuning |
| Establish weekly usage review | 1 hour | 30 min/week |
| Generate monthly usage reports | 2-3 hours | 2-3 hours/month |
Frequently Asked Questions
Can employees bypass AI tool monitoring by using personal devices?
Yes, and this is the primary limitation of any monitoring approach. Browser extensions and endpoint monitoring only cover managed devices. Employees using personal phones or laptops to access AI tools are invisible to these methods. DNS monitoring covers personal devices if they are on the corporate network or VPN, but not if they use cellular data. The most effective mitigation is to make approved tools so convenient and capable that employees prefer using them on managed devices over using alternatives on personal ones. Combining technical monitoring with a strong training and acknowledgment program reduces personal device workarounds because employees understand the risks and consequences.
What privacy considerations apply to AI tool monitoring?
AI tool monitoring must comply with employment law and privacy regulations in every jurisdiction where you operate. In the EU, monitoring may require a legitimate interest assessment, employee notification, and works council consultation under GDPR. In the US, requirements vary by state. California has stricter notification requirements than most other states. Best practice is to be transparent about monitoring in your acceptable use policy, limit data collection to what is necessary for governance purposes, restrict access to monitoring data to a small governance team, and retain data only as long as needed for compliance purposes. Consult employment counsel before deploying monitoring.
How do you handle AI features embedded in existing tools like Notion AI or Slack AI?
Embedded AI features are among the hardest to track because they do not generate separate tool access events. When an employee uses Notion AI within Notion, it does not create a distinct browser URL or OAuth grant because the employee is already authenticated to Notion. The most effective approach is to identify which of your existing SaaS tools have AI features, document whether those features are approved or restricted, and use the SaaS vendor's admin console to monitor AI feature usage where available. Some vendors provide usage analytics for their AI features. Where vendor analytics are not available, include embedded AI features in employee training so that employees understand which embedded features they may and may not use.
What should you do when monitoring reveals widespread shadow AI usage?
Do not panic and do not immediately block everything. Widespread shadow AI usage is normal and expected in organizations that have not previously monitored AI tool adoption. Start by analyzing what tools are being used and for what purposes. Identify tools that can be quickly approved with appropriate guardrails because they meet security requirements. Identify tools that pose genuine security risks and must be blocked with approved alternatives provided. Communicate the findings to employees with a clear timeline for tool assessments and a commitment to making the approved tool catalog meet their needs. Organizations that respond to shadow AI discovery with mass blocking and punitive measures drive usage further underground.
How often should you update your AI tool detection signatures?
Update your AI tool detection list monthly at minimum. The AI tool landscape is expanding rapidly, with new tools launching every week and existing tools changing domains or adding new API endpoints. Subscribe to threat intelligence feeds and AI tool directories that track new launches. Review industry publications and employee tool requests for emerging tools that should be added to your detection list. PolicyGuard updates its AI tool database continuously, adding new tools as they emerge so that your monitoring stays current without manual maintenance.
Track Every AI Tool in Your Organization
PolicyGuard provides layered AI tool detection across browser, OAuth, and DNS monitoring. Get a complete picture of AI usage with zero blind spots and automated compliance classification.
Start free trial