The Department of Defense (DoD) orchestrated a major maneuver. On May 1, 2026, it announced that it had formed strategic partnerships with a collection of technology giants, including OpenAI, Google, and Microsoft.
The goal is to transform the military into an “AI-oriented combat force” and guarantee decision-making superiority in all theaters of operations. Behind this announcement, however, lies a spectacular exclusion: that of Anthropic, a player until then central, and whose AI Claude was the only one available on the Pentagon’s classified network.
This public divorce stems from a deep disagreement over the limits to be imposed on the military use of artificial intelligence.
While the Trump administration has declared Anthropic persona non grata, the situation has shifted with Mythos, the company’s new cybersecurity tool, so powerful that discussions have resumed.
Why is Anthropic excluded from the Pentagon’s AI deal?
Anthropic was sidelined because of its intransigence on the “security safeguards” governing the use of its AI models. The startup insisted that the Pentagon prohibit the use of its technologies for certain applications, including autonomous weapons and mass surveillance.
A categorical refusal to give a blank check for “all lawful uses,” as required by the Trump administration.
This divergence led to escalation. The DoD labeled Anthropic a “supply chain risk,” a designation usually reserved for companies linked to foreign adversaries.
This decision effectively barred Anthropic from government contracts, pushing the company to sue over what it considers an abuse of power. The courts have suspended the sanction for the moment, but the damage is done.
What is the Mythos tool and how is it a game changer?
Mythos is the latest creation from Anthropic’s labs: an artificial intelligence model with unprecedented offensive and defensive cyber capabilities. It is a double-edged sword.
On the one hand, it can identify a network’s vulnerabilities with formidable efficiency. On the other, it can hand a hacker a complete roadmap to exploit them. The tool is so powerful that the DoD itself has called it a “full-fledged national security moment.”

The release of Mythos completely changed the game. The Pentagon, despite having officially banned Anthropic, cannot ignore such critical technology.
As a result, the administration finds itself forced to negotiate with a company it has publicly ostracized. Anthropic’s CEO was received at the White House, and President Trump even conceded that an agreement was “possible,” acknowledging the potential usefulness of a “very intelligent” company.
What future for the collaboration between Anthropic and the American army?
The future of this relationship is shrouded in strategic fog. On the one hand, the official ban remains in force, and Anthropic’s competitors are taking advantage of it to entrench themselves.
On the other hand, the urgency posed by Mythos and the recognized superiority of Anthropic’s models create immense pressure to find common ground. Agencies like the NSA are already using Mythos despite the official blockade, illustrating the paradox of the situation.
Behind the scenes, discussions are intense. The DoD says it still wants safeguards but is ready to “negotiate” them. The key question is whether Anthropic will yield on its ethical principles under financial pressure, or whether the Pentagon will accept unprecedented constraints to gain access to cutting-edge cybersecurity technology.
