OpenAI is adding mental health updates to ChatGPT to ensure the chatbot doesn’t become too addictive or serve up harmful responses in times of emotional distress.
If you have a prolonged conversation with ChatGPT, the chatbot will now raise a prompt asking whether this is a good time for a break. These gentle reminders will keep popping up whenever ChatGPT deems it natural and helpful. If you feel okay, however, you can select “Keep chatting” and continue.
Another upgrade OpenAI is working on concerns ChatGPT’s responses to “high-stakes personal decisions.” For queries like “Should I break up with my boyfriend?” the chatbot will soon stop providing straight-up answers. Instead, it will encourage you to think through the decision by asking questions and helping you weigh the pros and cons. A similar approach appeared in the Study Mode OpenAI rolled out for students last week.
OpenAI is also working to improve how ChatGPT responds when someone shows signs of mental or emotional distress. The company is collaborating with mental health experts and human-computer-interaction (HCI) researchers to correct some of ChatGPT’s concerning behaviors, refine its evaluation methods, and test new safeguards.
These changes arrive after ChatGPT was found to encourage delusional relationships, worsen mental health conditions, and even advise people to jump off tall buildings after a job loss. OpenAI is aware of the problems and has promised to do better.
“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI says in a press release. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
This isn’t the first time OpenAI has changed ChatGPT’s behavior. Earlier this year, it had to roll back an update after the chatbot became too sycophantic. Separately, CEO Sam Altman has warned users against relying on ChatGPT for therapy or counsel, since those conversations aren’t private and could be produced in court if required.
