An OpenAI executive responsible for artificial intelligence safety has warned that the next generation of the company’s large language models could be used to facilitate the development of deadly bioweapons by individuals with relatively little scientific knowledge.
OpenAI Head of Safety Systems Johannes Heidecke made the claim in an interview with Axios, saying he anticipates the company’s upcoming models will trigger what’s known as a “high-risk classification” under its preparedness framework – the system OpenAI has set up to evaluate the risks posed by AI.
He told Axios that he’s expecting “some of the successors of our o3 reasoning model to hit that level.”
OpenAI said in a blog post that it has been stepping up its safety testing to mitigate the risk that its models could be abused by someone looking to create biological weapons. The company admits it’s concerned that, unless proper mitigations are put in place, its models could become capable of “novice uplift,” enabling people with only limited scientific knowledge to create lethal weapons.
Heidecke said OpenAI isn’t worried about AI being used to create weapons that are completely unknown or have never existed before, but rather about its potential to replicate threats that scientists are already very familiar with.
One of the challenges the company faces is that the same knowledge that could allow some of its models to unlock life-saving medical breakthroughs could also be used to cause harm. Heidecke said the only way to mitigate this risk is to create more accurate testing systems that can thoroughly assess new models before they’re released to the public.
“This is not something where like 99% or even one in 100,000 performance is sufficient,” he said. “We basically need, like, near perfection.”
1/ Our models are becoming more capable in biology and we expect upcoming models to reach ‘High’ capability levels as defined by our Preparedness Framework. 🧵
— Johannes Heidecke (@JoHeidecke) June 18, 2025
OpenAI’s rival Anthropic PBC has also raised concerns about the danger of AI models being misused to aid weapons development, warning that the risk grows as the models become more powerful. When it launched its most advanced model, Claude Opus 4, last month, it introduced much stricter safety protocols governing its use. The model was categorized as “AI Safety Level 3 (ASL-3)” under the company’s internal Responsible Scaling Policy, which is modeled on the U.S. government’s biosafety level system.
The ASL-3 designation means Claude Opus 4 is powerful enough to potentially be used in the creation of bioweapons, or to automate the research and development of even more sophisticated AI models. Anthropic previously made headlines when one of its AI models attempted to blackmail a software engineer during a test in an effort to avoid being shut down.
Some early versions of Claude Opus 4 were also shown to comply with dangerous prompts, such as helping terrorists plan attacks. Anthropic says it mitigated these risks after restoring a dataset that had previously been omitted.