Anthropic says the US government needs classified communication channels with AI companies.
The recommendation is one of many in a 10-page document that Anthropic submitted to the US Office of Science and Technology Policy (OSTP) in response to the Trump administration’s call for public comment on its AI action plan.
From Anthropic’s perspective, the US needs to prepare for powerful AI systems capable of “matching or exceeding” the intellectual capacity of Nobel Prize winners, which could arrive as soon as 2026 or 2027. It points to the progress of its latest model, the Pokémon-playing Claude 3.7 Sonnet, as proof of how fast the tech is evolving.
Anthropic CEO Dario Amodei (Credit: Chesnot/Getty Images)
Anthropic argues that “classified communication channels between AI labs and intelligence agencies” could help the US combat national security threats, along with “expedited security clearances for industry professionals” and a new set of security standards for AI infrastructure.
But could that leave the public in the dark about critical decisions at a time when many jobs and industries are grappling with the effects of AI? Anthropic predicts that AI systems will soon be able to do the jobs that “highly capable” humans do today, including navigating digital interfaces and “interfacing with the physical world” by controlling lab equipment and manufacturing tools. That could lead to “potential large-scale changes to the economy,” the company says. To monitor those changes, it recommends “modernizing economic data collection, like the Census Bureau’s surveys.”
President Trump reversed the Biden administration’s executive order on AI and replaced it with one titled “Removing Barriers to American Leadership in Artificial Intelligence.” Although the new administration is expected to take a relatively hands-off approach to AI regulation, Anthropic says the government needs to stay involved. It should track the development of AI systems, create “standard assessment frameworks,” and accelerate its own adoption of AI tools, which is one stated goal of Elon Musk’s Department of Government Efficiency (DOGE).
Anthropic also calls for building more AI infrastructure, such as the $500 billion Stargate project, and further restricting semiconductor exports to adversaries. “We believe the United States must take decisive action to maintain technological leadership,” Anthropic says.
In the past, Anthropic CEO Dario Amodei has supported government regulations for potentially threatening AI systems. The company wrote a lengthy letter in support of California’s AI safety bill, citing the “importance of averting catastrophic misuse” of the technology. Governor Gavin Newsom ultimately vetoed the bill over concern that it only targeted large tech companies and ignored the threats presented by smaller ones.
The comment period for the Trump administration’s AI action plan ends on March 15.