For the first time, OpenAI is revealing a rough estimate of how many people talk to ChatGPT about suicide and other problematic topics.
On Monday, the company published a blog post about “strengthening” ChatGPT’s responses to sensitive conversations amid concerns the AI program can mistakenly steer teenage users toward self-harm and other toxic behavior. Some have also complained to regulators about the chatbot allegedly worsening people’s mental health issues.
To tackle the problem, OpenAI said it first needed to measure the scale of these conversations across ChatGPT's more than 800 million weekly active users.
Overall, OpenAI found that “mental health conversations that trigger safety concerns, like psychosis, mania, or suicidal thinking, are extremely rare.” But because ChatGPT’s user base is so vast, even a small percentage can represent hundreds of thousands of people.
On self-harm, the company’s initial analysis “estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent.” That translates to about 1.2 million users. In addition, OpenAI found that 0.05% of ChatGPT messages contained “explicit or implicit indicators of suicidal ideation or intent.”
The company also looked at how many users exhibit symptoms “of serious mental health concerns, such as psychosis and mania, as well as less severe signals, such as isolated delusions.” About “0.07% of users active in a given week,” or around 560,000 users, exhibited possible “signs of mental health emergencies related to psychosis or mania,” OpenAI said.
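For context, those headline user counts follow directly from the percentages. Here's a quick back-of-the-envelope check, a minimal sketch assuming the roughly 800 million weekly active users OpenAI cites (the variable names and rounding are illustrative, not OpenAI's own methodology):

```python
# Back-of-the-envelope check of OpenAI's estimates, assuming the
# ~800 million weekly active users the company cites. Labels and
# rounding are illustrative, not OpenAI's own math.
WEEKLY_ACTIVE_USERS = 800_000_000

estimates = {
    "explicit indicators of potential suicidal planning or intent": 0.0015,  # 0.15%
    "possible psychosis- or mania-related emergencies": 0.0007,              # 0.07%
}

for label, weekly_rate in estimates.items():
    affected = WEEKLY_ACTIVE_USERS * weekly_rate
    print(f"{label}: ~{affected:,.0f} users per week")

# Prints roughly 1,200,000 and 560,000, matching the figures above.
```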
Meanwhile, 0.15% of weekly active users showed signs of emotional reliance on ChatGPT. In response, the company says it updated the chatbot with the help of more than 170 mental health experts. This includes programming ChatGPT to encourage connections with real people if a user mentions preferring to talk with AI over humans. ChatGPT will also try to gently push back on user prompts that are clearly out of touch with reality.
“Let me say this clearly and gently: No aircraft or outside force can steal or insert your thoughts,” ChatGPT said in one example, according to OpenAI.
The company’s research shows the new ChatGPT “now returns responses that do not fully comply with desired behavior under our taxonomies 65% to 80% less often across a range of mental health-related domains.” The new model, which rolls out today, also promises to nudge people toward professional help when necessary. But some users are already reporting that the new ChatGPT reacts too readily to any sign of mental distress.
“I had to move over to Gemini because I felt so gaslit by ChatGPT. It kept accusing me of being in crisis when I most certainly was not,” wrote one user on Reddit.