With new chatbot safety controversies cropping up regularly, AI start-up Anthropic has updated the usage policy of its Claude chatbot to clamp down on one potentially disastrous use case.
The updated policy explicitly forbids using the chatbot to “synthesize, or otherwise develop, high-yield explosives or biological, chemical, radiological, or nuclear weapons or their precursors.” Though Anthropic’s terms and conditions previously contained a clause forbidding the design of “weapons, explosives, dangerous materials or other systems designed to cause harm,” this is the first time they have included this level of granular detail, The Verge points out.
In contrast, Anthropic has loosened Claude’s restrictions in some other areas. The company has backtracked on its blanket ban on generating all types of lobbying or campaign content, narrowing the restriction to only “prohibit use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting.” Anthropic said the move was in the interests of enabling “legitimate political discourse.”
Anthropic also added new terms to stop its tools from being used to conduct cyberattacks or create malware.
Though there are no reported real-world examples of terrorists using publicly released chatbots to build a biological, chemical, radiological, or nuclear weapon, plenty of research has highlighted how large language models (LLMs) could potentially be used for these ends.
In April 2025, security researchers at HiddenLayer alleged it was possible to bypass safeguards in mainstream LLMs from OpenAI, Anthropic, Meta, and Google to produce guides on enriching uranium (a key step in building a nuclear weapon). Though the chatbots didn’t reveal information that wasn’t already available on the internet, they presented it in a readable format that may be easier to follow for individuals without the relevant technical background.
Meanwhile, a 2024 academic paper involving researchers from Northwestern and Stanford, reported by Time Magazine, found that even though today’s AI models probably do not “substantially contribute” to biological risk, future systems could help to engineer new pandemic-causing pathogens.
We’ve also seen foreign powers such as China allegedly use chatbots for offensive purposes, albeit indirectly, for example by asking ChatGPT to write and translate political propaganda for international audiences.