On Monday, Defense Secretary Pete Hegseth announced that Grok, the artificial intelligence chatbot developed by Elon Musk, will soon operate within Pentagon networks. The system will work alongside Google’s generative AI tools and reach both classified and unclassified military systems.
“Soon we will have the world’s leading AI models on every unclassified and classified network in our department,” Hegseth said, speaking at SpaceX headquarters in Texas, according to the Associated Press.
The rollout, expected later this month, marks a sharp acceleration in the way the US military plans to use AI. It also comes at a time when Grok is under intense scrutiny abroad for generating sexualized and violent images.
A mashup of big words
Hegseth framed the move as part of a new “AI Acceleration Strategy” at the War Department (formerly the Department of Defense), which he said would “unleash experimentation, remove bureaucratic barriers, focus on investments, and demonstrate the execution approach needed to ensure we lead in military AI and that it becomes more dominant in the future,” according to The Guardian.
It’s all about the data. Hegseth said he has directed the Pentagon’s Chief Digital and Artificial Intelligence Office to make military data widely available for what he called “AI exploitation,” including information from intelligence systems.
“AI is only as good as the data it receives, and we’re going to make sure that’s there,” he said.
For years, defense officials have argued that the Pentagon sits atop one of the richest sources of data in the world, drawn from decades of logistics, surveillance and combat operations. The challenge has been to turn that information into actionable insight without compromising security or civil liberties.
In December, the Department of Defense selected Google’s Gemini model to power its internal AI platform GenAI.mil, which provides generative tools for military personnel. Grok will now join that ecosystem, alongside systems developed by Anthropic and OpenAI under contracts worth up to $200 million.
The approach reflects a broader shift. Previous governments encouraged the adoption of AI but attached strict restrictions, including bans on systems that could automate nuclear weapons or violate civil rights. It remains unclear whether these bans still apply under current leadership.
Hegseth suggested a looser stance. He said he would reject AI models “that don’t allow you to fight wars” and argued that military systems should operate “without ideological constraints that limit legal military applications.”
Why Grok?
The Pentagon’s confidence in Grok stands in stark contrast to the way the system is being received elsewhere.
Grok is embedded in Musk’s social media platform, X, and has drawn global criticism for allowing users to generate explicit and non-consensual deepfake images. In response, Indonesia and Malaysia temporarily blocked access to the chatbot, citing concerns over sexualized content.
In the United Kingdom, Ofcom, the country’s online security watchdog, has opened an investigation into X over Grok’s role in manipulating images of women and children.
Users and watchdogs have also pointed out anti-Semitic and racist messages generated by the chatbot. In one incident earlier this year, Grok referred to itself as “MechaHitler,” prompting widespread condemnation.
Musk has repeatedly described Grok as an alternative to what he calls “woke AI,” positioning it against rivals such as Google’s Gemini and OpenAI’s ChatGPT. That framing has resonated with some political leaders, even as it worries regulators and community groups.
The Pentagon did not immediately respond to questions about how it plans to prevent similar abuses within military systems.
Still, defense officials argue that the risks of falling behind outweigh the dangers of acting quickly. Militaries around the world are racing to deploy AI for planning, logistics, intelligence analysis and cyber operations. China has made military AI a strategic priority as NATO allies experiment with their own systems.
The decision to bring Grok to the Pentagon underlines a growing divide: civilian governments are tightening regulations around AI safety and consent, while defense institutions are pressing ahead, driven by competition and the demands of warfare.
