A disturbing new report finds ChatGPT and Copilot are already the biggest source of workplace data leaks — here’s what we know
As more companies adopt generative AI tools like ChatGPT, Microsoft Copilot and Claude to improve productivity and streamline workflows, they are discovering that these tools expose company secrets at an alarming rate.
A new Cyera report highlights that AI chats are now the No. 1 cause of data leaks in the workplace, surpassing both cloud storage and email for the first time. And the scariest part? Most of it is flying so far under the radar that companies aren't even noticing.
Security threats from employees, not hackers
The research shows that nearly 50% of enterprise employees are using generative AI at work, often pasting sensitive material, such as financial records, personally identifiable information and even strategy documents, directly into AI chatbots.
This type of information should never be shared with AI, so why are users doing it? In most cases, the leaks happen through personal, unmanaged accounts for tools like ChatGPT or Gemini, which are invisible to corporate security systems. Yet 77% of these interactions involve real company data.
And because the data is shared by copying and pasting into chat windows rather than by uploading files, it bypasses traditional data-loss prevention (DLP) tools entirely.
Most security platforms are built to catch file attachments, suspicious downloads or outbound emails. But AI conversations look like normal web traffic — yes, even when they contain confidential info.
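To see why, it helps to look at what an AI chat message actually is on the wire. Here's a minimal Python sketch of such a request; the endpoint, payload shape and pasted text are all hypothetical, purely for illustration:

```python
import requests  # third-party HTTP library: pip install requests

# Example text an employee might paste; the contents are invented.
pasted_text = "Q3 revenue forecast and the draft restructuring plan..."

# To a network monitor, this is just an HTTPS POST with a JSON body,
# the same shape as countless other web apps. The endpoint below is
# hypothetical, not any specific AI vendor's API.
response = requests.post(
    "https://chat.example.com/api/messages",
    json={"role": "user", "content": f"Summarize this: {pasted_text}"},
    timeout=10,
)
```

There's no attachment, download or outbound email for a scanner to flag; the sensitive text rides along inside an ordinary request body.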
A 2025 LayerX enterprise report found that 67% of AI interactions happen on personal accounts, which means IT teams can't monitor or restrict them. With no way to oversee those personal logins, AI becomes a security blind spot.
How to protect your company — and yourself
The reports aren't suggesting companies ban AI outright. Instead, they're a wake-up call to tighten controls, improve visibility and provide critical oversight. Here's what the researchers suggest:
- Block access to generative AI from personal accounts
- Require single sign-on (SSO) for all AI tools used on company devices
- Monitor for sensitive keywords and clipboard activity (see the sketch after this list)
- Treat chat interactions with the same scrutiny as file transfers
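To make the clipboard-monitoring idea concrete, here's a minimal Python sketch of what such a check could look like. It assumes the third-party pyperclip library for clipboard access, and the patterns and keywords are illustrative placeholders, not a production DLP policy:

```python
import re
import time

import pyperclip  # third-party clipboard library: pip install pyperclip

# Illustrative patterns only -- a real DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "keyword": re.compile(r"\b(confidential|internal only|do not distribute)\b", re.I),
}

def scan(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def watch_clipboard(poll_seconds: float = 1.0) -> None:
    """Poll the clipboard and warn when newly copied text looks sensitive."""
    last = ""
    while True:
        current = pyperclip.paste()
        if current != last:
            last = current
            hits = scan(current)
            if hits:
                print(f"Warning: clipboard contains {', '.join(hits)} -- "
                      "think twice before pasting into an AI chat.")
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch_clipboard()
```

Enterprise tools implement this far more robustly, but the principle is the same: inspect the text itself, not just files and attachments.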
This may seem obvious, but if you are an employee, do not paste anything into an AI chat that you wouldn’t post publicly on the internet.
Bottom line
AI is still fairly new in the workplace, so employees are learning how to use these tools while also figuring out what they should and shouldn't be used for. That can get sticky, because most employees would never intentionally leak data. A simple prompt like "Summarize this report for me" can feel to an employee like a productive shortcut, yet it could put an entire company at risk if the wrong document is pasted into the chat.
In the race to boost productivity with AI, one innocent copy-paste could be all it takes to expose your company's secrets. Knowing the threat exists is the first step toward better security.