The US Department of Defense has awarded ChatGPT maker OpenAI a $200 million contract to develop “prototype frontier AI capabilities,” the government and company announced on Monday.
The deal runs through the Defense Department’s Chief Digital and Artificial Intelligence Office and is expected to be completed within one year. OpenAI said in its statement that its AI could help the department perform tasks ranging from “transform[ing] its administrative operations … to streamlining how they look at program and acquisition data, to supporting proactive cyber defense.”
That’s a broad list, from automating bureaucratic processes to potentially giving OpenAI’s tech a major role in the digital systems that safeguard every American’s personal information. It could also be just the first step toward broader adoption across government agencies.
The contract is a pilot program and the first partnership under the new OpenAI for Government initiative, through which the company aims to put its AI tools in the hands of “public servants across the United States.” OpenAI says the initiative offers federal, state and local governments access to its “models within secure and compliant environments” and, on a limited basis, new custom AI models for national security.
This isn’t OpenAI’s first time dipping its toes into government operations. In January, the company launched ChatGPT Gov, a pathway for government employees to access OpenAI’s models while still following the necessary security protocols. It also has partnerships with the US national laboratories, the Air Force Research Laboratory, NASA, the National Institutes of Health and the Treasury Department, all of which will be folded into OpenAI for Government.
This deal also builds on OpenAI’s other national security work. Late last year, the company announced a partnership with Anduril, a defense contractor focused on AI-powered robotics and drones. Anduril’s statement explicitly pointed to OpenAI’s potential to “improve the nation’s defense systems that protect US and allied military personnel from attacks by unmanned drones and other aerial devices.” (Anduril also recently announced a deal with Meta to build VR/AR tech for the US Army.)
Many essential questions around AI, like those involving privacy and safety, remain unanswered. They take on even greater significance as generative AI is adopted in government operations that may involve sensitive personal information, legal status or law enforcement activity. That could put OpenAI’s policies to the test: they specify that its AI shouldn’t be used to compromise the privacy of real people, including to “create or expand facial recognition databases without consent” or to “conduct real-time remote biometric identification in public spaces for law enforcement purposes.”
It’s not surprising to see OpenAI cozy up to the US government. Since its original ChatGPT model spurred the generative AI rush in late 2022, governments in the US and abroad have struggled with how to implement and regulate the new tech, and every branch of the US government has been affected. There hasn’t been any substantial federal regulation of AI; to the contrary, President Trump’s “big beautiful bill” on government spending, which is making its way through Congress, would prevent states from regulating AI themselves.
Some government bodies, like the US Copyright Office, have laid out guidelines for AI. Meanwhile, in the courts, publishers and artists have filed lawsuits against AI companies alleging copyright infringement and misuse of training material. (Disclosure: Ziff Davis, this publication’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)