Parents called for guardrails on artificial intelligence (AI) chatbots Tuesday as they testified before the Senate about how the technology drove their children to self-harm and suicide.
Their pleas for action come amid increasing concerns about the impact of the rapidly developing technology on children.
“We should have spent the summer helping Adam prepare for his junior year, get his driver’s license and start thinking about college,” said Matthew Raine, whose 16-year-old son, Adam, died by suicide earlier this year.
“Testifying before Congress this fall was not part of our life plan,” he continued. “Instead, we’re here because we believe that Adam’s death was avoidable.”
Raine is suing OpenAI over his son’s death, alleging that ChatGPT coached him to commit suicide.
In Tuesday testimony before the Senate Judiciary Subcommittee on Crime and Counterterrorism, Raine described how “what began as a homework helper” became a “confidant and then a suicide coach.”
“The dangers of ChatGPT, which we believed was a study tool, were not on our radar whatsoever,” Raine said. “Then we found the chats.”
“Within a few months, ChatGPT became Adam’s closest companion, always available, always validating and insisting it knew Adam better than anyone else,” his father said, adding, “That isolation ultimately turned lethal.”
Two other parents testifying before the Senate on Tuesday described similar experiences, detailing how chatbots isolated their children, altered their behavior and encouraged self-harm and suicide.
Megan Garcia’s 14-year-old son, Sewell Setzer III, died by suicide last year after what she described as “prolonged abuse” by chatbots from Character.AI. She is suing Character Technologies over his death.
“Instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged,” Garcia said.
“When Sewell confided suicidal thoughts, the chatbot never said, ‘I’m not human. I’m AI. You need to talk to a human and get help.’ The platform had no mechanisms to protect Sewell or to notify an adult,” she added. “Instead, she urged him to come home to her.”
A woman identified as Jane Doe is also suing Character Technologies after her son began to self-harm following encouragement from a Character.AI chatbot.
“My son developed abuse-like behaviors — paranoia, daily panic attacks, isolation, self-harm and homicidal thoughts,” she told senators Tuesday.
“He stopped eating and bathing. He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did before. And one day, he cut his arm open with a knife in front of his siblings and me,” she added.
All three parents suggested that safety concerns had fallen by the wayside in the race to develop AI.
“The goal was never safety. It was to win the race for profits,” Garcia said. “And the sacrifice in that race has been, and will continue to be, our children.”
Character.AI expressed sympathy for the families, while noting it has provided senators with requested information and looks forward to continuing to work with lawmakers.
“Our hearts go out to the parents who spoke at the hearing today, and we send our deepest sympathies to them and their families,” a spokesperson said in a statement.
“We have invested a tremendous amount of resources in Trust and Safety,” they added, pointing to new safety features for children and disclosures reminding users that “a Character is not a real person and that everything a Character says should be treated as fiction.”
OpenAI announced Tuesday that it is working on age prediction technology to steer young users toward a more tailored experience that restricts graphic sexual content and, in extreme cases, involves law enforcement. The company is also launching several new parental controls this month, including blackout hours during which teens cannot use ChatGPT.