The most commonly used unauthorized AI tools at work are ChatGPT (personal accounts), Claude, Gemini, Microsoft Copilot (personal), Grammarly, Perplexity, Midjourney, Notion AI, Otter.ai, and ElevenLabs.
Employees adopt these tools to work faster, but personal accounts lack enterprise data protections. Every prompt entered into a personal AI account potentially exposes proprietary data, customer information, or regulated records to third-party training datasets.
TL;DR: ChatGPT personal accounts, Grammarly, and personal Copilot are the most common unauthorized AI tools used at work.
Shadow AI: AI tools used by employees without organizational knowledge, approval, or governance controls.
Shadow AI is not a hypothetical risk. Surveys repeatedly find that a majority of employees, often more than 70% in recent studies, use AI tools their IT department has not approved. The gap between approved tools and actual usage creates data exposure, compliance violations, and audit failures. Here are the tools employees use most, why they use them, and how to detect unauthorized usage.
Top 15 Unauthorized AI Tools
This table ranks tools by frequency of unauthorized workplace usage based on aggregated organizational data.
| Tool | Category | Common Work Use | Risk Level | Why Preferred |
|---|---|---|---|---|
| ChatGPT (personal) | General LLM | Drafting, analysis, coding | High | Most familiar, free tier available |
| Claude (personal) | General LLM | Writing, research, analysis | High | Longer context, strong reasoning |
| Gemini (personal) | General LLM | Research, summarization | High | Google ecosystem integration |
| Microsoft Copilot (personal) | General LLM | Document drafting, email | High | Built into Edge and Windows |
| Grammarly | Writing assistant | Email, reports, proposals | Medium | Always-on writing improvement |
| Perplexity | AI search | Research, fact-checking | Medium | Better than traditional search |
| Midjourney | Image generation | Presentations, marketing | Medium | High-quality image output |
| Notion AI | Productivity | Notes, project docs | Medium | Integrated into existing workflow |
| Otter.ai | Transcription | Meeting notes | High | Automated meeting summaries |
| ElevenLabs | Voice AI | Voiceovers, presentations | Medium | Realistic voice synthesis |
| Jasper | Marketing AI | Content creation | Medium | Marketing-specific templates |
| Copy.ai | Marketing AI | Ad copy, social media | Medium | Fast content generation |
| Cursor | Coding AI | Code generation, debugging | High | IDE-integrated AI coding |
| Replit AI | Coding AI | Prototyping, coding | High | Browser-based development |
| Gamma | Presentation AI | Slide decks | Low | Faster than PowerPoint |
The highest-risk tools are those where employees paste large amounts of text: LLMs and transcription services. These tools process and potentially store everything entered into them.
What Data Employees Share
Employees do not intend to leak data. They paste work content into AI tools because it is faster than doing the task manually. The data categories most commonly shared include:
- Source code: Developers paste code for debugging, refactoring, or generating tests. This can expose proprietary algorithms, API keys, and security vulnerabilities.
- Customer data: Support teams paste customer emails and tickets for drafting responses. Names, account numbers, and complaint details are exposed.
- Financial data: Finance teams paste spreadsheet data for analysis. Revenue figures, forecasts, and deal terms enter third-party systems.
- Legal documents: Legal teams paste contracts and agreements for review. Confidential terms, counterparty information, and negotiation positions are exposed.
- HR data: HR teams paste employee reviews, compensation data, and disciplinary records for summarization or drafting.
- Strategic plans: Executives paste strategy documents, board materials, and M&A analysis for editing and summarization.
- Patient/health data: Healthcare workers paste clinical notes or patient communications into tools with no business associate agreement, risking HIPAA violations.
- Meeting transcripts: Otter.ai and similar tools capture entire meetings, including confidential discussions that participants assumed were private.
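Catching these categories before text leaves the organization is a standard data-loss-prevention pattern. The following is a minimal sketch of a pre-submission check that flags common sensitive-data signatures in text an employee is about to paste into an AI tool. The patterns and category names are illustrative assumptions, not a production rule set, which would be far broader and tuned to your data.

```python
import re

# Illustrative patterns only; a real DLP rule set covers many more
# secret formats, PII types, and locale-specific identifiers.
PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[-_A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of sensitive-data categories detected in text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(flag_sensitive("Please debug this: key=sk_live_abc123def456ghi789"))
```

A check like this can run in a browser extension or proxy, warning the employee or blocking the request when a category matches.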
Why Employees Use Them (Not Malicious)
Understanding motivation is essential for effective governance. Employees use unauthorized AI tools for practical, not malicious, reasons:
- No approved alternative exists: The organization has not provided an enterprise AI tool, or the approved tool does not cover their use case.
- Approved tools are too slow to access: Procurement and IT approval processes take weeks or months. Employees need help now.
- They do not know it is unauthorized: Many employees do not realize that using a personal ChatGPT account for work tasks violates policy, and where no explicit policy exists, nothing tells them otherwise.
- The approved tool is worse: Enterprise AI tools sometimes have restricted capabilities compared to consumer versions. Employees switch to the better tool.
- Peer influence: When one team member shares a productivity trick using an AI tool, the entire team adopts it within days. Adoption spreads faster than governance.
See What AI Tools Your Employees Actually Use
PolicyGuard detects unauthorized AI tools across your organization in real time. Know the full picture before your next audit.
Start free trial →
How to Find What Your Employees Use
Detection requires multiple methods because no single approach catches everything.
| Detection Method | What It Catches | What It Misses | Setup Time |
|---|---|---|---|
| Browser extension monitoring | AI tool visits with user context and time spent | Mobile usage, non-browser AI apps | Days |
| DNS monitoring | All network connections to AI domains | User identity, what data was shared | Hours |
| OAuth application audit | AI apps connected to Google/Microsoft accounts | Tools used without SSO integration | Hours |
| Expense report review | Paid AI tool subscriptions | Free tier usage | Minutes |
| Employee survey | Self-reported tool usage | Tools employees do not want to disclose | Days |
| Endpoint monitoring | AI app installations on managed devices | Web-based tools, personal devices | Weeks |
The most effective approach combines browser extension monitoring, DNS monitoring, and OAuth audit. Together, these three methods cover web-based usage, network-level traffic, and application integrations. For detailed analysis of shadow AI risks and mitigation strategies, see our shadow AI risk guide. For building the governance framework that addresses these findings, read our AI policy and governance guide.
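To make the DNS-monitoring row concrete, here is a minimal sketch that reviews resolver query logs against a list of known AI tool domains. The log format shown (timestamp, client IP, queried domain) and the domain list are illustrative assumptions; adapt the parsing to your resolver's actual export format, such as BIND query logs or a secure web gateway's CSV.

```python
# Known AI tool domains to watch for (illustrative, not exhaustive).
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com",
    "otter.ai", "elevenlabs.io", "perplexity.ai", "midjourney.com",
}

def flag_ai_queries(log_lines):
    """Yield (timestamp, domain) for DNS queries that hit a known AI domain."""
    for line in log_lines:
        # Assumed log format: "<timestamp> <client-ip> <queried-domain>"
        parts = line.split()
        if len(parts) < 3:
            continue
        timestamp, domain = parts[0], parts[2].lower().rstrip(".")
        # Match the domain itself or any subdomain of it.
        if domain in AI_DOMAINS or any(domain.endswith("." + d) for d in AI_DOMAINS):
            yield timestamp, domain

sample = [
    "2025-06-01T09:14:02 10.0.0.12 chatgpt.com.",
    "2025-06-01T09:15:40 10.0.0.31 intranet.example.com.",
    "2025-06-01T09:16:05 10.0.0.12 api.otter.ai.",
]
for ts, domain in flag_ai_queries(sample):
    print(ts, domain)
```

Note the table's caveat still applies: DNS logs show that a device reached an AI domain, not which user was involved or what data was shared, which is why pairing this with browser-level monitoring matters.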
Frequently Asked Questions
Is using ChatGPT at work illegal?
Using ChatGPT at work is not illegal by itself. However, entering regulated data (PII, PHI, financial records) into a personal AI account can violate GDPR, HIPAA, or industry regulations. The legal risk depends on what data is shared and what regulations apply to your organization.
Can employers monitor which AI tools employees use?
Yes, on company-managed devices and networks. Employers can monitor browser activity, network traffic, and application usage on corporate assets. Monitoring personal devices requires explicit consent and varies by jurisdiction. Transparency about monitoring practices is both legally required in many regions and practically necessary for employee trust.
What is the biggest risk of unauthorized AI tool usage?
Data exposure. When employees paste proprietary or regulated data into personal AI accounts, that data may be used for model training, stored in jurisdictions that violate data residency requirements, or exposed through future security breaches at the AI provider. The data leaves your control permanently.
Should organizations block all unauthorized AI tools?
Blocking alone drives usage underground. The most effective approach combines blocking high-risk tools at the network level with providing approved alternatives for common use cases. Employees who have a good approved tool rarely seek unauthorized options.
How many AI tools does the average employee use?
Research suggests the average knowledge worker uses 2-4 AI tools regularly, with at least one being unauthorized. In technology and marketing roles, the number is higher: 4-7 tools, with 2-3 being unauthorized. The gap between approved and actual usage grows as AI tools become more specialized.
Discover Your Shadow AI Exposure
PolicyGuard identifies every unauthorized AI tool in your organization and quantifies the data risk. Get visibility before your next audit.
Start free trial