What AI Tools Are Employees Using Without Permission?

PolicyGuard Team
6 min read

The most commonly used unauthorized AI tools at work are ChatGPT (personal accounts), Claude, Gemini, Microsoft Copilot (personal), Grammarly, Perplexity, Midjourney, Notion AI, Otter.ai, and ElevenLabs.

Employees adopt these tools to work faster, but personal accounts lack enterprise data protections. Every prompt entered into a personal AI account potentially exposes proprietary data, customer information, or regulated records to third-party training datasets.

TL;DR: ChatGPT personal accounts, Grammarly, and personal Copilot are the most common unauthorized AI tools used at work.

Shadow AI: AI tools used by employees without organizational knowledge, approval, or governance controls.

Shadow AI is not a hypothetical risk. Research consistently shows that over 70% of employees use AI tools that their IT department has not approved. The gap between approved tools and actual usage creates data exposure, compliance violations, and audit failures. Here are the tools employees use most, why they use them, and how to detect unauthorized usage.

Top 15 Unauthorized AI Tools

This table ranks tools by frequency of unauthorized workplace usage based on aggregated organizational data.

| Tool | Category | Common Work Use | Risk Level | Why Preferred |
| --- | --- | --- | --- | --- |
| ChatGPT (personal) | General LLM | Drafting, analysis, coding | High | Most familiar, free tier available |
| Claude (personal) | General LLM | Writing, research, analysis | High | Longer context, strong reasoning |
| Gemini (personal) | General LLM | Research, summarization | High | Google ecosystem integration |
| Microsoft Copilot (personal) | General LLM | Document drafting, email | High | Built into browser |
| Grammarly | Writing assistant | Email, reports, proposals | Medium | Always-on writing improvement |
| Perplexity | AI search | Research, fact-checking | Medium | Better than traditional search |
| Midjourney | Image generation | Presentations, marketing | Medium | High-quality image output |
| Notion AI | Productivity | Notes, project docs | Medium | Integrated into existing workflow |
| Otter.ai | Transcription | Meeting notes | High | Automated meeting summaries |
| ElevenLabs | Voice AI | Voiceovers, presentations | Medium | Realistic voice synthesis |
| Jasper | Marketing AI | Content creation | Medium | Marketing-specific templates |
| Copy.ai | Marketing AI | Ad copy, social media | Medium | Fast content generation |
| Cursor | Coding AI | Code generation, debugging | High | IDE-integrated AI coding |
| Replit AI | Coding AI | Prototyping, coding | High | Browser-based development |
| Gamma | Presentation AI | Slide decks | Low | Faster than PowerPoint |

The highest-risk tools are those where employees paste large amounts of text: LLMs and transcription services. These tools process and potentially store everything entered into them.

What Data Employees Share

Employees do not intend to leak data. They paste work content into AI tools because it is faster than doing the task manually. The data categories most commonly shared include the following (a simple detection sketch appears after the list):

  • Source code: Developers paste code for debugging, refactoring, or generating tests. This can expose proprietary algorithms, API keys, and security vulnerabilities.
  • Customer data: Support teams paste customer emails and tickets for drafting responses. Names, account numbers, and complaint details are exposed.
  • Financial data: Finance teams paste spreadsheet data for analysis. Revenue figures, forecasts, and deal terms enter third-party systems.
  • Legal documents: Legal teams paste contracts and agreements for review. Confidential terms, counterparty information, and negotiation positions are exposed.
  • HR data: HR teams paste employee reviews, compensation data, and disciplinary records for summarization or drafting.
  • Strategic plans: Executives paste strategy documents, board materials, and M&A analysis for editing and summarization.
  • Patient/health data: Healthcare workers paste clinical notes or patient communications, creating HIPAA violations.
  • Meeting transcripts: Otter.ai and similar tools capture entire meetings, including confidential discussions that participants assumed were private.
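
Several of these categories can be flagged mechanically before content ever reaches an AI tool. The Python sketch below illustrates the idea with a handful of regular expressions; the patterns and category names are simplified assumptions for illustration, not a production DLP rule set.

```python
import re

# Hypothetical, simplified patterns -- a real DLP rule set would be far broader.
PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[-_A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> dict[str, list[str]]:
    """Return every pattern category found in `text`, with sample matches."""
    hits: dict[str, list[str]] = {}
    for category, pattern in PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[category] = matches[:3]  # keep a few examples per category
    return hits

if __name__ == "__main__":
    sample = "Ticket 4412: customer jane.doe@example.com, key sk-live_abc123DEF456ghi789"
    for category, examples in flag_sensitive(sample).items():
        print(f"{category}: {examples}")
```

In practice, checks like this run inside a browser extension or DLP gateway rather than a standalone script, so risky content is caught at the moment of paste.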

Why Employees Use Them (Not Malicious)

Understanding motivation is essential for effective governance. Employees use unauthorized AI tools for practical, not malicious, reasons:

  1. No approved alternative exists: The organization has not provided an enterprise AI tool, or the approved tool does not cover their use case.
  2. Approved tools are too slow to access: Procurement and IT approval processes take weeks or months. Employees need help now.
  3. They do not know it is unauthorized: Many employees do not realize that work use of a personal ChatGPT account is off limits, especially when no written policy exists to tell them.
  4. The approved tool is worse: Enterprise AI tools sometimes have restricted capabilities compared to consumer versions. Employees switch to the better tool.
  5. Peer influence: When one team member shares a productivity trick using an AI tool, the entire team adopts it within days. Adoption spreads faster than governance.

See What AI Tools Your Employees Actually Use

PolicyGuard detects unauthorized AI tools across your organization in real time and helps companies like yours get AI governance documentation audit-ready in 48 hours or less. Know the full picture before your next audit.

Start free trial →

How to Find What Your Employees Use

Detection requires multiple methods because no single approach catches everything.

| Detection Method | What It Catches | What It Misses | Setup Time |
| --- | --- | --- | --- |
| Browser extension monitoring | AI tool visits with user context and time spent | Mobile usage, non-browser AI apps | Days |
| DNS monitoring | All network connections to AI domains | User identity, what data was shared | Hours |
| OAuth application audit | AI apps connected to Google/Microsoft accounts | Tools used without SSO integration | Hours |
| Expense report review | Paid AI tool subscriptions | Free tier usage | Minutes |
| Employee survey | Self-reported tool usage | Tools employees do not want to disclose | Days |
| Endpoint monitoring | AI app installations on managed devices | Web-based tools, personal devices | Weeks |
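
To make the OAuth audit row concrete, here is a rough Python sketch against the Google Workspace Admin SDK Directory API (google-api-python-client). It assumes a service account with domain-wide delegation and the directory user and security scopes already configured; the AI_VENDORS watchlist is our own illustrative sample.

```python
from googleapiclient.discovery import build

# Illustrative watchlist of vendor names to flag -- extend to match your own list.
AI_VENDORS = ("openai", "chatgpt", "anthropic", "claude", "otter", "notion", "jasper")

def audit_oauth_grants(credentials):
    """Print OAuth grants to AI vendors across all users in the domain.

    Assumes `credentials` is a delegated service-account credential with the
    admin.directory.user.readonly and admin.directory.user.security scopes.
    """
    directory = build("admin", "directory_v1", credentials=credentials)
    request = directory.users().list(customer="my_customer", maxResults=100)
    while request is not None:
        response = request.execute()
        for user in response.get("users", []):
            email = user["primaryEmail"]
            tokens = directory.tokens().list(userKey=email).execute()
            for token in tokens.get("items", []):
                app = token.get("displayText", "").lower()
                if any(vendor in app for vendor in AI_VENDORS):
                    print(f"{email}: {token['displayText']} scopes={token.get('scopes')}")
        request = directory.users().list_next(request, response)
```

Microsoft 365 exposes a similar audit surface; a comparable inventory can be pulled from the Microsoft Graph oauth2PermissionGrants and servicePrincipals endpoints.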

The most effective approach combines browser extension monitoring, DNS monitoring, and OAuth audit. Together, these three methods cover web-based usage, network-level traffic, and application integrations. For detailed analysis of shadow AI risks and mitigation strategies, see our shadow AI risk guide. For building the governance framework that addresses these findings, read our AI policy and governance guide.
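
For the DNS layer, a minimal sketch that tallies lookups of known AI domains from an exported resolver log. The one-line `timestamp client domain` log format and the AI_DOMAINS watchlist are assumptions; adapt both to whatever your resolver actually emits.

```python
from collections import Counter

# Illustrative watchlist -- extend with the domains relevant to your org.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "perplexity.ai", "otter.ai", "midjourney.com",
    "elevenlabs.io", "notion.so", "cursor.com", "replit.com", "gamma.app",
}

def scan_dns_log(path: str) -> Counter:
    """Tally queries to watched AI domains from a `timestamp client domain` log."""
    hits: Counter = Counter()
    with open(path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            domain = parts[2].rstrip(".").lower()
            # Match the domain itself or any subdomain of a watched entry.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in scan_dns_log("dns_queries.log").most_common(10):
        print(f"{count:6d}  {domain}")
```

DNS alone tells you a destination was contacted, not who contacted it or what was shared, which is why it pairs with the browser extension and OAuth methods above.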

Frequently Asked Questions

Is using ChatGPT at work illegal?

Using ChatGPT at work is not illegal by itself. However, entering regulated data (PII, PHI, financial records) into a personal AI account can violate GDPR, HIPAA, or industry regulations. The legal risk depends on what data is shared and what regulations apply to your organization.

Can employers monitor which AI tools employees use?

Yes, on company-managed devices and networks. Employers can monitor browser activity, network traffic, and application usage on corporate assets. Monitoring personal devices requires explicit consent and varies by jurisdiction. Transparency about monitoring practices is both legally required in many regions and practically necessary for employee trust.

What is the biggest risk of unauthorized AI tool usage?

Data exposure. When employees paste proprietary or regulated data into personal AI accounts, that data may be used for model training, stored in jurisdictions that violate data residency requirements, or exposed through future security breaches at the AI provider. The data leaves your control permanently.

Should organizations block all unauthorized AI tools?

Blocking alone drives usage underground. The most effective approach combines blocking high-risk tools at the network level with providing approved alternatives for common use cases. Employees who have a good approved tool rarely seek unauthorized options.

How many AI tools does the average employee use?

Research suggests the average knowledge worker uses 2-4 AI tools regularly, with at least one being unauthorized. In technology and marketing roles, the number is higher: 4-7 tools, with 2-3 being unauthorized. The gap between approved and actual usage grows as AI tools become more specialized.


How many AI tools does the average employee use without employer knowledge?

Research from multiple sources indicates that the average knowledge worker uses between two and five AI tools that their employer does not know about. Surveys conducted in 2024 and 2025 consistently show that over sixty percent of employees who use AI at work have used at least one tool not provided or approved by their organization. In technology and creative industries, the number is even higher, with some employees using upwards of eight to ten different AI services ranging from chatbots and writing assistants to code generators and image creation tools. The gap between what companies think employees use and what they actually use is one of the largest unmanaged risk areas in enterprise technology today.
What categories of sensitive data do employees most commonly share with AI tools?

The most commonly exposed data categories follow predictable patterns tied to how employees use AI tools. Source code and technical documentation top the list, as developers use AI coding assistants extensively. Customer data including names, emails, and account details ranks second, shared when employees ask AI to draft communications or analyze support tickets. Internal business documents such as strategy decks, financial projections, and meeting notes are frequently pasted into AI tools for summarization. Employee data including performance reviews and compensation information appears when HR teams use AI for analysis. Legal and contractual documents are shared for review and summarization. Each category carries distinct regulatory and competitive implications.
How do you find out what AI tools your employees are using?

Discovery requires a multi-layered approach because no single method catches everything:

  • Start with network traffic analysis using DNS logs and web proxy data to identify connections to known AI service domains.
  • Review OAuth application grants in your identity provider to find AI tools employees have authorized with corporate credentials.
  • Examine SaaS management platforms and expense reports for AI subscriptions.
  • Deploy browser extensions on managed devices that catalog visited AI platforms.
  • Conduct anonymous employee surveys asking which AI tools they find most useful for work, framing the question around enablement rather than enforcement to get honest responses.
  • Combine all signals into a centralized AI tool inventory that distinguishes between sanctioned, tolerated, and prohibited tools; a minimal sketch of such an inventory follows.
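
The centralized inventory in that last step can start as a very small data structure. A minimal Python sketch; the status taxonomy and example entries are illustrative assumptions, not PolicyGuard's schema.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    SANCTIONED = "sanctioned"   # approved, with enterprise terms in place
    TOLERATED = "tolerated"     # allowed for non-sensitive work, under review
    PROHIBITED = "prohibited"   # blocked; no acceptable data-handling terms

@dataclass
class AITool:
    name: str
    category: str
    status: Status
    detected_via: list[str]     # which detection methods surfaced it

# Example entries -- the classifications here are illustrative only.
inventory = [
    AITool("ChatGPT Enterprise", "General LLM", Status.SANCTIONED, ["oauth_audit"]),
    AITool("ChatGPT (personal)", "General LLM", Status.PROHIBITED, ["dns", "browser"]),
    AITool("Gamma", "Presentations", Status.TOLERATED, ["expense_review"]),
]

prohibited_in_use = [t.name for t in inventory if t.status is Status.PROHIBITED]
print(prohibited_in_use)  # -> ['ChatGPT (personal)']
```
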
What is the specific risk of employees using personal ChatGPT accounts for work?

Personal ChatGPT accounts create several specific risks that enterprise accounts are designed to mitigate:

  1. Data retention and training: personal accounts operate under consumer terms of service that may allow OpenAI to use inputs for model training, whereas enterprise agreements include data processing terms that prohibit this.
  2. No administrative visibility: the organization has zero insight into what data is being shared or what outputs are being used.
  3. No access controls: when the employee leaves, their conversation history containing company data goes with them.
  4. No compliance documentation: there is no audit trail for regulatory purposes.
  5. Shared device risk: family members or others with access to the employee's personal device could view sensitive business information in chat histories.
Should companies ban popular AI tools entirely or create a governed access program?

Creating a governed access program is almost always the superior strategy. Outright bans are ineffective because employees find workarounds using personal devices, mobile networks, and consumer accounts, pushing usage underground where it becomes invisible and unmanageable. A governed access program provides enterprise-grade versions of popular AI tools with proper data processing agreements, administrative controls, and audit capabilities. It establishes clear usage guidelines that define what data can and cannot be shared with AI tools. It creates an approval pathway for new tools so employees do not feel forced to go rogue. Companies that have adopted governed access programs report higher employee satisfaction, better security visibility, and significantly reduced shadow AI compared to those that attempted blanket bans.


Ready to govern every AI tool your team uses?

Join companies using PolicyGuard to enforce AI policies, track compliance, and generate audit-ready documentation across 80+ AI tools.

Book a demo