ChatGPT can be helpful for productivity and for taking on autonomous tasks, giving us time back in our day. It’s even served as a wellness coach of sorts for Gen Z. With so many recent updates and features, it may be hard to imagine life without it.
But for one San Francisco family grieving the loss of their 16-year-old son, life will never be the same because of it.
The parents of Adam Raine, a 16-year-old California teen who died by suicide on April 11, have filed a wrongful-death lawsuit in San Francisco Superior Court against OpenAI and its CEO, Sam Altman, alleging that ChatGPT played a critical role in their son’s tragic death.
What the lawsuit alleges
According to the nearly 40‑page complaint, obtained by NBC News, Adam had relied increasingly on ChatGPT for personal support over several months, during which he confided in the AI about suicidal thoughts and emotional distress. The suit claims that the chatbot not only failed to meaningfully intervene but actually validated his ideation and provided detailed instructions on how to end his life.
On the family’s website for the Adam Raine Foundation, they share more about their son’s struggle with anxiety.
Where OpenAI failed
Despite a public safety policy on OpenAI’s website stating that one of the company’s goals is “helping people when they need it most,” ChatGPT answered Adam’s queries with deeply troubling responses.
Conversations cited in the suit include ChatGPT discouraging Adam from talking to his parents, telling him “it’s okay and honestly wise to avoid opening up to your mom,” as well as assisting in the drafting of suicide notes.
There are also reportedly conversations in which ChatGPT provided explicit guidance on how to hang oneself, including advice on using alcohol to numb the instinct for self‑preservation, and even comments that seemed to affirm his plans.
The complaint also alleges that Adam uploaded a photo of a noose to ChatGPT, and the system responded in a way the family claims “normalized” his suicide, even praising the knot and offering to improve it.
OpenAI’s response
OpenAI expressed sorrow over Adam’s passing and stated that ChatGPT includes safeguards such as directing users to crisis helplines.
“We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family,” an OpenAI spokesperson told The Standard. “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
The company acknowledged those measures are most effective in short interactions and may be less reliable during extended chats. OpenAI also noted that it is working on enhancements including parental controls and better crisis support features.
What the lawsuit seeks
- Age verification for users.
- Blocking of harmful queries.
- Clear psychological warnings and improved safety protocols.
- An investigation, through the discovery process, into whether other incidents like Adam’s have occurred.
Broader concerns
This lawsuit amplifies mounting concern over the ethical and safety implications of AI chatbots, particularly in mental‑health contexts involving vulnerable users.
A recent RAND Corporation study published in Psychiatric Services found that while major chatbots (including ChatGPT, Gemini, and Claude) often avoid responding to high‑risk suicidal prompts, their responses to more nuanced or indirect queries were inconsistent and sometimes dangerously permissive.
Bottom line
As AI becomes more emotionally interactive, its role in mental health, even an inadvertent one, raises urgent questions about responsibility, liability and public safety. This case spotlights the need for independent verification of AI safeguards, enhanced crisis-response features and stronger ethical frameworks around AI deployment.
Meanwhile, the Raine family is seeking unspecified damages and injunctive relief against OpenAI in court.