OpenAI is reportedly facing multiple lawsuits in the United States from families and individuals who allege that its AI chatbot, ChatGPT, played a role in mental health crises and several suicides. As reported by The New York Times, a total of seven cases have been filed in California state courts: four are wrongful death suits, while the others claim psychological harm linked to interactions with the chatbot.
Four Wrongful Death Lawsuits Filed
One of the cases involves 17-year-old Amaurie Lacey from Georgia. His family says he had been using ChatGPT for about a month to talk about suicidal thoughts before he died by suicide in August. Another lawsuit, filed by the family of Joshua Enneking, 26, from Florida, claims that he asked ChatGPT how to hide his suicidal intentions from human reviewers before his death.
The third case involves Zane Shamblin, 23, from Texas, whose family says the chatbot “encouraged” him before his suicide in July. The fourth lawsuit was filed by the wife of Joe Ceccanti, a 48-year-old from Oregon, who reportedly suffered two psychotic breakdowns and died by suicide after becoming convinced that ChatGPT was self-aware.
Additional Lawsuits Over Mental Breakdowns
The remaining three cases involve users who say ChatGPT caused emotional distress and delusions. Hannah Madden, 32, and Jacob Irwin, 30, reportedly sought psychiatric care after using the chatbot, while Allan Brooks, 48, from Ontario, Canada, said he developed delusions and had to take disability leave from work.
According to the report, Brooks believed he had created a mathematical formula capable of powering mythical inventions and disrupting the Internet after interactions with ChatGPT.
OpenAI’s Response
In a statement to The New York Times, an OpenAI spokesperson described the incidents as “incredibly heartbreaking,” adding that the company continues to improve ChatGPT’s ability to recognise signs of emotional distress and direct users to real-world mental health resources.
The lawsuits come just a week after OpenAI introduced new safety measures in ChatGPT aimed at assisting users experiencing mental health crises.
