The Pentagon has reached agreements with seven AI providers, the US Department of Defense announced today. The companies include SpaceX, OpenAI, Google, Nvidia, Microsoft, Amazon Web Services and the start-up Reflection. They are to deploy their AI systems in the IL6 and IL7 classified network environments. This is no longer just about experimental pilots, but about putting advanced AI capabilities to use in highly sensitive military environments.
The step is part of the Pentagon’s AI strategy and is intended to enable new capabilities in operational control, intelligence gathering and administration. “These agreements accelerate the transformation of the U.S. military toward AI-powered forces and strengthen its ability to maintain decision-making superiority in all areas of warfare,” the Pentagon statement said.
As evidence of the expansion, the US Department of Defense points to GenAI.mil, the Pentagon’s official AI platform. More than 1.3 million employees used it in the first five months, generating tens of millions of prompts and deploying hundreds of thousands of agents.
Silicon Valley is moving closer to the Pentagon
While some companies such as SpaceX and OpenAI have previously entered into similar agreements with the Pentagon, the latest deals are an important step toward integrating AI tools into the Defense Department’s operational practices, according to the Wall Street Journal. The new agreements also show how willing parts of Silicon Valley have become to accept the Defense Department’s terms.
According to the newspaper, many of the companies involved point to assurances that their tools will not be used for mass surveillance or autonomous weapon systems. The Pentagon, for its part, emphasizes that it does not carry out such unlawful activities and says companies should trust that the US military is using AI responsibly.
Anthropic is missing, but Mythos remains relevant
One major AI provider missing from the list is Anthropic. The Pentagon classified the company as a supply chain risk and blacklisted it in March after Anthropic insisted on restrictions on the use of its Claude models in contract negotiations. Among other things, the company wanted to contractually exclude mass surveillance of US citizens and use in autonomous weapon systems. The dispute is now being played out in court: an appeals court in Washington DC recently rejected Anthropic’s request to temporarily suspend the classification as a supply chain risk. However, the case has not been finally decided.
At the same time, the new cybersecurity model Mythos has apparently eased tensions somewhat. Despite the sanctions, the US government resumed talks with Anthropic because Mythos is considered too important in terms of security policy. Emil Michael, head of technology at the US Department of Defense, told CNBC on Friday that Anthropic is still considered a supply chain risk. Mythos, however, is a “separate national security moment.” He justified this with the model’s special ability to find and close cyber vulnerabilities. How to handle Mythos is therefore being examined not only in the Department of Defense, but across the government.
Despite the renewed talks, tensions remain: Anthropic’s plans to expand access to Mythos to about 70 other companies have met with resistance in the White House.
(tobe)
