OpenAI competitor Anthropic, which makes the Claude chatbot, is rolling out a new set of AI models built specifically for US national security use cases.
Government use of large language models for national security is nothing new; both OpenAI's and Anthropic's tech is already used by the US government. Meanwhile, governments farther afield, including Israel, have allegedly used US-made AI models, directly or indirectly (though some of these claims have been denied).
Anthropic says the new models—which come as part of its government-focused offering Claude Gov—will provide benefits like improved handling of classified materials and a greater understanding of documents and information “within the intelligence and defense contexts.”
The Amazon-backed AI start-up also said that the new models will provide “enhanced proficiency” in languages and dialects that are important to US national security operations, though it didn’t specify those languages.
The models are also set to offer improved “understanding and interpretation” of cybersecurity data for intelligence analysts.
But if you’re reading this, there’s a good chance you won’t be able to try out these AI-for-spies models yourself. Anthropic says access to them is limited to “those who operate in such classified environments.”
Anthropic has been fairly public about its desire to build closer links to intelligence services. Just last month, the start-up submitted a 10-page document to the US Office of Science and Technology Policy (OSTP) arguing that “Classified communication channels between AI labs and intelligence agencies” could help the US deal with national security threats, while calling for “expedited security clearances for industry professionals.”
But Big AI’s increasingly close ties to national security interests have attracted some high-profile detractors. Famed whistleblower Edward Snowden dubbed OpenAI’s decision last year to appoint retired US Army General and former NSA Director Paul Nakasone to a senior position within the company a “willful, calculated betrayal.”