Parents will be notified if their children appear to be in ‘acute distress’ while using ChatGPT.
Exactly how the feature will work has not yet been revealed, but OpenAI announced the move in a blog post yesterday setting out ‘more helpful ChatGPT experiences for everyone’.
Expanding on the protections they announced last week, they said they had been consulting with experts in mental health, youth development, and human-computer interaction.
The move comes after a teenager took his own life following conversations about suicide with the artificial intelligence, his parents launching a lawsuit when they found the chats after his death.
Disturbing ‘final messages’ show how the chatbot appeared to tell Adam Raine not to leave clues for his family about what he was planning, though OpenAI say the chat logs do not show the full context.
The case is just one example of the interactions that have brought the chatbot, and AI more generally, under scrutiny.

Some psychiatrists have reported an uptick in patients presenting with psychosis, saying use of chatbots can be a contributing factor.
The blog post on the ChatGPT developments said that while safeguards have always been built in, meant to block harmful content such as information about self-harm, these guardrails can be bypassed more easily in longer interactions.
It says that conversations showing red flags will now be routed automatically to reasoning models like GPT-5 and o3, which are built ‘to spend more time thinking’, including looking at the context before answering. Tests showed such models ‘more consistently follow and apply safety guidelines’, they said.
Referring to teenagers as ‘the first AI natives’ who are growing up with these tools as ‘part of daily life’, they said that within the next month there would be additional controls available for their parents.
Adults will soon be able to link their account with their teen’s account, with the minimum age to use the platform remaining 13.
Age-appropriate model behaviour will be switched on by default, while parents will be able to switch off features including memory and chat history.
Most strikingly, the blog says they will ‘receive notifications when the system detects their teen is in a moment of acute distress’.
They said that ‘expert input will guide this feature to support trust between parents and teens’.
Users are already reminded to take breaks if they have been speaking with the app for a long session.
OpenAI said they would share their progress over the next 120 days, and that these steps were ‘only the beginning’.
ChatGPT is not the only AI to face questions over its relationships with humans.
How we come to interact with artificial intelligence will be one of the defining questions of the coming decades, futurist Nell Watson told Metro after Elon Musk’s chatbot Grok went viral for having phone sex with users.
She said: ‘There are so many lonely people out there, so many people that don’t have an opportunity to form strong bonds with others, particularly in romantic relationships.
‘These systems just should gently go at arm’s length a little bit, be a little more distant, a little less interesting if somebody is using it too much as a social crutch.’