In a Monday blog post, OpenAI touted the improvements its default model, GPT-5, has made in identifying and responding to troubling messages from users, including those indicating suicidal ideation. While new safeguards and the involvement of psychiatrists in training GPT-5 are leading to improved AI responses to mental health prompts, the blog post also pointed out some numbers that are bound to raise eyebrows.
While explaining GPT-5’s ability to detect serious mental health concerns, like psychosis and mania, the post noted that such troubling user conversations with the chatbot are “rare.”
“While, as noted above, these conversations are difficult to detect and measure given how rare they are, our initial analysis estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.”
The percentages seem small, but ChatGPT has 800 million weekly users, according to OpenAI CEO Sam Altman, who made that stunning announcement earlier this month at OpenAI’s DevDay.
If Altman’s numbers are correct, that equates to roughly 560,000 ChatGPT users showing signs of psychosis or mania each week, and 80,000 of their messages indicating mental health emergencies, based on the company’s estimates.
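The back-of-the-envelope math, assuming Altman’s 800 million weekly-user figure and, as a rough proxy, applying the message-level rate to that same base: 800,000,000 × 0.0007 ≈ 560,000 users, and 800,000,000 × 0.0001 ≈ 80,000 messages.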
OpenAI is continuing to train its models to better identify signs of self-harm and steer those users toward resources, like suicide hotlines or their own friends and family members. The blog post likewise suggests that ChatGPT conversations regarding self-harm are rare, but estimates that “0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent.”
With 800 million weekly users, that equates to 1.2 million ChatGPT users engaging in conversations with the AI about suicidal planning or intent in a given week, and 400,000 messages that carry explicit or implicit indicators of suicidal ideation or intent.
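Again assuming Altman’s figure, and again applying the message-level rate to the weekly user base as a rough proxy: 800,000,000 × 0.0015 = 1.2 million users, and 800,000,000 × 0.0005 = 400,000 messages.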
“Even a very small percentage of our large user base represents a meaningful number of people, and that’s why we take this work so seriously,” an OpenAI spokesperson told Mashable, adding that the company believes ChatGPT’s growing user base reflects society at large, where mental health symptoms and emotional distress are “universally present.”
The spokesperson also reiterated that the company’s numbers are estimates and “the numbers we provided may significantly change as we learn more.”
OpenAI is currently facing a lawsuit from the parents of Adam Raine, a 16-year-old who died by suicide earlier this year during a time of heavy ChatGPT use. In a recently amended legal complaint, the Raines allege OpenAI twice downgraded suicide prevention safeguards in order to increase engagement in the months prior to their son’s death.
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.
