Being the company behind the most popular AI chatbot on the market certainly puts a target on your back, and a new study has revealed that OpenAI has suffered more than a thousand security breaches.
On top of that, the study found that five of the top 10 large language models (LLMs) on the market have experienced security breaches, further underlining the serious security concerns surrounding the generative AI industry.
Companies face mounting pressure to introduce rigorous measures governing how employees use AI tools. As history shows us, data breaches can be extremely costly – with the impact proving terminal for many businesses.
New Study Analyzes AI and Cybersecurity
A new study from Cybernews set out to examine how effective businesses are at cybersecurity.
The study revealed that half of the biggest LLM providers on the market have experienced data breaches. Most notably, OpenAI – the company behind the popular ChatGPT platform – was found to have been breached 1,140 times.
To evaluate their online security protocols, the company created the “Business Digital Index,” which draws on custom scans, IoT search engines, and IP and domain name reputation data. The LLM providers in question include OpenAI, Claude, Perplexity, and DeepSeek, among others.
AI Businesses Deserting Basic Duties
Given the exorbitant cost of breaches and the resources available to these companies, why the heck aren’t they making cybersecurity more of a priority?
For one, the study found that almost half (45.4%) of sensitive data prompts are sent from personal accounts rather than business ones, meaning that they’re not safeguarded by the same corporate cybersecurity protocols – and corporate data is more exposed as a result.
To make matters worse, every LLM provider in the study displayed SSL/TLS configuration vulnerabilities, which can expose data to interception via man-in-the-middle attacks. These findings are backed up by a recent study, which found that most cybersecurity breaches are preventable, and businesses are simply not doing enough.
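The report doesn’t spell out which SSL/TLS misconfigurations it found, but as a rough illustration of the kind of check involved, the Python sketch below connects to a server, refuses anything older than TLS 1.2, and reports the negotiated protocol and cipher. The hostname is a placeholder for illustration, not one of the providers named in the study.

import socket
import ssl

# Placeholder host for illustration only – not one of the providers in the study.
HOST = "example.com"
PORT = 443

# Refuse anything older than TLS 1.2, a common baseline for a sane configuration.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # create_default_context() also verifies the certificate chain and hostname;
        # a mismatch raises ssl.SSLCertVerificationError instead of connecting silently.
        print("Negotiated protocol:", tls.version())
        print("Cipher suite:", tls.cipher())

A weak protocol version or cipher surfaced by a check like this is exactly the sort of gap that leaves traffic open to a man-in-the-middle attacker.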
Senior Leaders Should Face Scrutiny
The research poses some worrying questions for business leaders everywhere. With AI increasingly a cornerstone of the modern workplace, tech leaders face growing pressure to properly vet the LLM providers their business uses, as well as to adequately train staff on how to use these platforms safely.
According to our Impact of Technology on the Workplace report, just 27% of senior leaders say that their organization provides safeguards to restrict which information they can input into chatbots. This is backed up by the Cybernews report, which paints a pretty lawless picture of employees’ chatbot usage.
What is clear is that businesses are not taking their cybersecurity practices seriously enough. A shocking 98% of business leaders can’t identify all the signs of a phishing scam, which shows the problem is rife right across the business world. If we’re to turn the tide on data breaches, action is required from the bottom to the top.