U.S. President Donald Trump today ordered federal agencies to stop using technology from Anthropic PBC.
The ban extends to military suppliers as well, U.S. Defense Secretary Pete Hegseth announced separately on X. Both moves stem from disagreements over the safety guardrails that Anthropic ships with its models.
Trump wrote on Truth Social that “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.” Hegseth’s X post, meanwhile, states that the Defense Department has designated the company as a supply chain risk to national security. That means U.S. military contractors, suppliers and partners may not conduct “any commercial activity” with Anthropic.
The saga began last June when the company won a $200 million artificial intelligence contract from the Pentagon. At the time, Anthropic stated that it would collaborate with defense officials to explore national security use cases for AI. Rival OpenAI Group PBC won a similar contract around the same time.
Last month, Reuters reported that Anthropic and the Pentagon had clashed over Claude’s safety guardrails. Anthropic doesn’t permit its AI to be used for mass surveillance of Americans or the development of autonomous weapons. The Pentagon took issue with the company’s policy. Officials reportedly demanded that Anthropic make Claude available for “all lawful purposes.”
Earlier this week, the Pentagon stated that it had offered concessions to resolve the matter. Officials invited Anthropic to join the Defense Department's ethics board and proposed certain other steps. The AI provider didn't accept the offer, stating that "new language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."
The disagreement came to a head on Tuesday when Hegseth summoned Anthropic Chief Executive Dario Amodei to a meeting. According to Axios, Hegseth gave the company until today to revise its policy. He reportedly stated that the Defense Department could designate Anthropic as a supply chain risk or order it to adapt its models’ guardrails under the Defense Production Act.
Now that the Pentagon has opted for the former option, Anthropic's models will be phased out of government networks within six months, where feasible. In a blog post published ahead of the move, Amodei wrote that "we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required."
According to CNBC, OpenAI CEO Sam Altman told employees that his company applies the same "red lines" to its models as Anthropic. The ChatGPT developer is reportedly in talks to bring its models to classified networks. Earlier, the Pentagon approved xAI Holdings Corp.'s Grok for use in classified networks despite safety concerns from officials at multiple federal agencies. Grok, however, is widely considered to trail the models from OpenAI, Anthropic and others in AI capabilities.
Photo: Wikimedia Commons
