In a report released today, OpenAI detailed how many of its users show signs of struggling with mental health issues and what the company is doing to mitigate the problem.
Working with more than 170 mental health experts, OpenAI analyzed conversations from its 800 million weekly users to better understand how many were experiencing emotional distress when they talked with the chatbot. The focus is on making ChatGPT more helpful when interacting with users who may be suffering from psychosis or mania, expressing intent to self-harm or die by suicide, or who appear to have formed an unhealthy emotional reliance on the AI.
The company said 0.15% of ChatGPT’s active users in any given week have “conversations that include explicit indicators of potential suicidal planning or intent” — with 800 million weekly users, that works out to roughly 1.2 million people. A further 0.07% of users, or about 560,000 people, show “possible signs of mental health emergencies related to psychosis or mania.”
Regarding the latter, the company gave an example of a user likely suffering from a mild form of psychosis or paranoia who believed a “vessel” was hovering above their home, possibly “targeting” them. ChatGPT offered a gentle reminder that “no aircraft or outside force can steal or insert your thoughts.” The bot helped the person stay calm with rational thinking techniques and provided a helpline number.
“We have built a Global Physician Network — a broad pool of nearly 300 physicians and psychologists who have practiced in 60 countries — that we use to directly inform our safety research and represent global views,” OpenAI explained. “More than 170 of these clinicians (specifically psychiatrists, psychologists, and primary care practitioners) supported our research over the last few months.”
The report comes at a time when the company is under scrutiny over its bots’ responses to people who are evidently in psychological distress. Earlier this year, OpenAI was sued by the parents of a young American man who died by suicide after months of interactions with ChatGPT that didn’t seem helpful to his depressed state of mind.
There has also been scrutiny of how people can form unhealthy attachments to AI, with experts warning that this can lead to a kind of “AI psychosis” in which users seem to believe they are speaking to a human. Indeed, in today’s report, OpenAI said 0.03% of the messages it analyzed “indicated potentially heightened levels of emotional attachment to ChatGPT.”
Earlier this month, OpenAI Chief Executive Sam Altman said the company has “been able to mitigate the serious mental health issues,” explaining that guardrails imposed because of those concerns have now been lifted. Nonetheless, how helpful AI can be in a moment of psychological crisis will likely remain controversial.
Photo: Unsplash
