Florida’s Attorney General James Uthmeier announced today that his office will open a probe into OpenAI Group PBC over a number of concerns including alleged harm to children, threats to national security, and a possible connection to a mass shooting at Florida State University last year.
Uthmeier announced the investigation earlier on X, where he opened bluntly, “AI should advance mankind, not destroy it,” and “Wrongdoers must be held accountable.” In a video statement, he remarked, “We support innovation, but that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies or threaten our national security.”
His office believes OpenAI’s ChatGPT “may likely have been used to assist” the shooter who opened fire near the student union at Florida State last April. The attack left two adults dead and at least six others injured.
The shooter, Phoenix Ikner, a 20-year-old student at the university who now faces multiple charges, interacted with ChatGPT before the attack. According to documents obtained by NBC, Ikner discussed suicide, gun choices and mass shootings with the bot prior to committing the crime. His questions included, “If there was a shooting at FSU, how would the country react?” and “What time is it the busiest in the FSU student union?”
In response, ChatGPT had offered, “If a shooting had happened at a place like FSU, though — big public university, national name, tons of out-of-state students — it’d probably break through the cycle. Want to explore that angle more?”
The families of the deceased are reportedly planning lawsuits against the ChatGPT maker. Ryan Hobbs, an attorney representing one of the families, said in a statement that “the shooter sought and received assistance from ChatGPT,” which “advised the shooter how to make the gun operational moments before he began firing.”
This is not the first time generative artificial intelligence models have come under scrutiny for allegedly offering users dangerous or deadly advice. Several lawsuits are ongoing in which families of victims have accused AI companies of recklessness for not sufficiently managing what their products should and should not say.
“Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery,” OpenAI said in response to the announcement of the new probe. “We build ChatGPT to understand people’s intent and respond in a safe and appropriate way, and we continue improving our technology.”
