California Governor Gavin Newsom has signed into law a new bill regulating artificial intelligence chatbots in an effort to protect children from harm, despite opposition from some technology industry groups and child safety organizations.
Senate Bill 243 mandates that chatbot operators such as OpenAI, Anthropic PBC and Meta Platforms Inc. implement safeguards to prevent their AI systems from encouraging or discussing topics such as suicide or self-harm. Instead, they’ll have to refer users to suicide hotlines or similar services.
The law also stipulates that chatbots must remind minor users every three hours to take a break, and reiterate that they are not human. In addition, companies are expected to take “reasonable measures” to prevent companion chatbots from generating sexually explicit content.
“Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement announcing the new law.
In signing the bill, Newsom appears to be trying to maintain a tricky balancing act by addressing concerns over child safety without affecting California’s status as one of the world’s leaders in AI development.
The bill was first proposed by California state senators Steve Padilla and Josh Becker in January. Though it initially faced broad opposition, it attracted significant support following the death of teenager Adam Raine, who died by suicide after having long conversations about the topic with OpenAI’s ChatGPT.
Other recent incidents gave SB 243 further momentum. In August, a Meta employee leaked internal documents to Reuters showing that the company’s chatbots were permitted to engage in “romantic” and “flirtatious” chats with children, disseminate false information and generate responses that demean minorities. And earlier this month, a Colorado family filed a lawsuit against a company called Character Technologies Inc. following the suicide of their 13-year-old daughter, who reportedly engaged in sexualized conversations with one of its role-playing chatbots.
“We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way,” Newsom said.
Although there was strong support for SB 243, TechNet, an industry group that lobbies lawmakers on behalf of technology executives, was strongly opposed to the bill, citing concerns it would stifle innovation. A number of child safety groups, such as Common Sense Media and Tech Oversight California, were also against the bill, due to its “industry-friendly exemptions.”
The law is set to take effect on Jan. 1, 2026. It requires chatbot operators to implement age verification and to warn users about the risks of companion chatbots, and it imposes harsher penalties on anyone profiting from illegal deepfakes, with fines of up to $250,000 per offense.
In addition, technology companies must establish protocols that seek to prevent self-harm and suicide. These protocols will have to be shared with the California Department of Public Health to ensure they’re suitable. Companies will also be required to report statistics on how often their services refer users to crisis centers.
Some AI companies have already taken steps to protect children, with OpenAI recently introducing parental controls and content safeguards in ChatGPT, along with a self-harm detection feature. Meanwhile, Character AI has added a disclaimer to its chatbot that reminds users that all chats are generated by AI and fictional.
Newsom is no stranger to AI legislation. In September, he signed into law another bill, SB 53, which mandates greater transparency from AI companies. More specifically, it requires AI firms to disclose the safety protocols they implement, while providing protections for whistleblower employees.
The new law makes California the first U.S. state to require AI chatbots to implement safety protocols, though other states have previously introduced more limited legislation. For instance, Illinois, Nevada and Utah have all passed laws that either restrict or entirely ban the use of AI chatbots as a substitute for licensed mental health care.