World of Software
News
OpenAI doubles down on ChatGPT safeguards as it faces wrongful death lawsuit

By News Room | Published 28 August 2025 (last updated 4:19 a.m.)

OpenAI reiterated existing mental health safeguards and announced future plans for its popular AI chatbot, addressing accusations that ChatGPT improperly responds to life-threatening discussions and facilitates user self-harm.

The company published a blog post detailing its model’s layered safeguards just hours after reports that the AI giant is facing a wrongful death lawsuit filed by the family of California teenager Adam Raine. The lawsuit alleges that Raine, who died by suicide, was able to bypass the chatbot’s guardrails and detail harmful and self-destructive thoughts, as well as suicidal ideation, which ChatGPT periodically affirmed.

SEE ALSO:

Dead teen’s family files wrongful death suit against OpenAI and ChatGPT

ChatGPT hit 700 million weekly active users earlier this month.

“At this scale, we sometimes encounter people in serious mental and emotional distress. We wrote about this a few weeks ago and had planned to share more after our next major update,” the company said in a statement. “However, recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it’s important to share more now.”

Currently, ChatGPT’s protocols include a series of stacked safeguards that seek to limit ChatGPT’s outputs according to specific safety limitations. When they work as intended, ChatGPT is instructed not to provide self-harm instructions or comply with continued prompts on that subject, instead escalating mentions of bodily harm to human moderators and directing users to the U.S.-based 988 Suicide & Crisis Lifeline, the UK Samaritans, or findahelpline.com. Notably, 988, a federally funded service, recently ended its LGBTQ-specific services under a Trump administration mandate, even as chatbot use among vulnerable teens grows.


In light of other cases in which isolated users in severe mental distress confided in unqualified digital companions, as well as previous lawsuits against AI competitors like Character.AI, online safety advocates have called on AI companies to take a more active approach to detecting and preventing harmful behavior, including automatic alerts to emergency services.

OpenAI said future GPT-5 updates will include instructions for the chatbot to “de-escalate” users in mental distress by “grounding the person in reality,” presumably a response to increased reports of the chatbot enabling states of delusion. OpenAI said it is exploring new ways to connect users directly to mental health professionals before users report what the company refers to as “acute self harm.” Other safety protocols could include “one-click messages or calls to saved emergency contacts, friends, or family members,” OpenAI writes, or an opt-in feature that lets ChatGPT reach out to emergency contacts automatically.

SEE ALSO:

Explaining the phenomenon known as ‘AI psychosis’

Earlier this month, OpenAI announced it was upgrading its latest model, GPT-5, with additional safeguards intended to foster healthier engagement with its AI helper. Noting criticisms that the chatbot’s prior models were overly sycophantic — to the point of potentially deleterious mental health outcomes — the company said its new model was better at recognizing mental and emotional distress and would respond differently to “high stakes” questions moving forward. GPT-5 also includes gentle nudges to end sessions that have gone on for extended periods of time, as individuals form increasingly dependent relationships with their digital companions.

Widespread backlash ensued, with GPT-4o users demanding the company reinstate the former model after losing their personalized chatbots. OpenAI CEO Sam Altman quickly conceded and brought back GPT-4o, despite previously acknowledging a growing problem of emotional dependency among ChatGPT users.

In the new blog post, OpenAI admitted that its safeguards can degrade and perform less reliably in long interactions, the kind many emotionally dependent users engage in every day, conceding that “even with these safeguards, there have been moments when our systems did not behave as intended in sensitive situations.”

If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you prefer not to use the phone, consider the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.
