OpenAI CEO Sam Altman has announced an “agreement” with the United States Department of Defense (DOD) to deploy the company’s AI models on the department’s “classified network.” This comes after President Trump ordered federal agencies to begin phasing out the use of Anthropic’s AI technology. The decision, which marks a shift in AI and Defense strategy, has heightened tensions between the White House and parts of the industry.
The announcement places OpenAI at the center of a growing dispute over how artificial intelligence should be used by the US military, particularly as the Trump administration moves to restrict federal use of rival AI systems. At the heart of the conflict is a broader debate about AI and Defense, or the role of AI in national security, including concerns about its use in cases of lethal force and its handling of sensitive government data and surveillance.
AI and Defense with the “Trump wars” in the background
Last Friday, Trump harshly attacked Anthropic in a post on Truth Social: “The unhinged leftists at Anthropic have made a DISASTROUS MISTAKE by attempting to FORCE the War Department to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES, our troops, and our national security at risk.”
The change of supplier was a foregone conclusion. What until recently seemed like a stable relationship between AI companies and the federal administration has entered an unexpectedly turbulent phase. Anthropic’s inclusion on the list of “supply chain risks” did not come out of nowhere. According to the official version, the decision follows the company’s refusal to accept that its Claude model could be used by the Department of Defense in any application the Government considers legal, including scenarios linked to advanced autonomous systems and sensitive operations.
The king is dead, long live the king: OpenAI will cash in
Altman said OpenAI has reached an agreement with the Department of Defense to deploy its models on the department’s classified network, and described the partnership as based on shared security principles. The agreement includes bans on domestic mass surveillance, maintains human accountability for the use of force, and incorporates technical safeguards, cloud-only deployment, and forward-deployed engineers.
The CEO of OpenAI wants the same terms offered to other AI companies and said the firm hopes tensions over the use of AI in government can be reduced through negotiated agreements rather than legal action. Altman framed the agreement as a security-focused partnership and stated that the terms were designed to align with existing laws and policies governing military use of emerging technologies:
“Tonight we reached an agreement with the Department of War (formerly Department of Defense) to deploy our models on their classified network. In all of our interactions, the War Department demonstrated a deep respect for security and a desire to collaborate to achieve the best possible outcome. AI safety and broad benefit sharing are at the heart of our mission. Two of our most important security principles are the prohibition of domestic mass surveillance and human responsibility in the use of force, including autonomous weapons systems,” explained the CEO of OpenAI.
The agreement sets the stage for OpenAI to begin deploying its models within the Department of Defense’s classified network under the outlined security terms — something that did not work out for Anthropic. The decision comes amid escalation in the Middle East following the attacks by the United States and Israel on Iran.
The “Trump wars” claim new victims, also in AI and Defense, while hundreds of OpenAI and Google employees signed an open letter this week asking their companies to support Anthropic’s position.
