California and Delaware warned OpenAI on Friday that they have “serious concerns” about the AI company’s safety practices in the wake of several recent deaths reportedly connected to ChatGPT.
In a letter to the OpenAI board, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings noted they recently met with the firm’s legal team and “conveyed in the strongest terms that safety is a non-negotiable priority, especially when it comes to children.”
The pair’s latest missive comes after the family of a 16-year-old boy sued OpenAI last Tuesday, alleging ChatGPT encouraged him to take his own life. The Wall Street Journal also reported last week that the chatbot fueled a 56-year-old Connecticut man’s paranoia before he killed his mother and himself in August.
“The recent deaths are unacceptable,” Bonta and Jennings wrote. “They have rightly shaken the American public’s confidence in OpenAI and this industry.”
“OpenAI – and the AI industry – must proactively and transparently ensure AI’s safe deployment,” they continued. “Doing so is mandated by OpenAI’s charitable mission, and will be required and enforced by our respective offices.”
The state attorneys general underscored the need to center safety as they continue discussions with the company about its restructuring plans.
“It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products’ development and deployment,” Bonta and Jennings said.
“As we continue our dialogue related to OpenAI’s recapitalization plan, we must work to accelerate and amplify safety as a governing force in the future of this powerful technology,” they added.
OpenAI, which is based in California and incorporated in Delaware, has previously engaged with the pair on its efforts to alter the company’s corporate structure.
The company initially announced plans in December to transition fully into a for-profit structure without nonprofit oversight. However, it later walked back the push, agreeing to keep the nonprofit in charge and citing discussions with the attorneys general and other leaders.
In the wake of recent reports about ChatGPT-connected deaths, OpenAI announced Tuesday it was adjusting how its chatbots respond to people in crisis and enacting stronger protections for teens.
OpenAI is not the only tech company under fire lately over its AI chatbots. Reuters reported last month that a Meta policy document featured examples suggesting its chatbots could engage in “romantic or sensual” conversations with children.
The social media company said it has since removed this language. It also later said it is updating its policies to restrict certain topics for teen users, including discussions of self-harm, suicide, disordered eating or potentially inappropriate romantic conversations.