A brief filed by Microsoft in Anthropic’s lawsuit against the U.S. Department of War underscores the deepening ties between the two companies, and Microsoft’s longstanding willingness to take on the federal government.
Microsoft on Tuesday urged a federal judge in San Francisco to temporarily block the Pentagon’s designation of Anthropic as a supply chain risk, arguing that immediate enforcement would hurt Microsoft and other government contractors that depend on Anthropic’s technology.
The government’s designation imposes “substantial and wide-ranging costs and risks” on companies that use Anthropic’s models “as a foundational layer of their own products and services, which they provide to the U.S. military,” Microsoft said in the filing.
The New York Times’ DealBook newsletter called Microsoft’s brief “a remarkable act” and “a momentous decision” for a company that is one of the largest government contractors in America, noting that it stands out in a period when corporate America’s unwritten rule has been to avoid picking fights with the White House.
The filing came a day after Microsoft launched Copilot Cowork, a new AI product built on Anthropic’s Claude models, and four months after Microsoft committed to invest up to $5 billion in the startup as part of a deal that includes Anthropic spending at least $30 billion on Microsoft Azure.
Amazon, which has invested $8 billion in Anthropic, has not publicly weighed in on the lawsuit or the supply chain risk designation. We’ve contacted the company for comment.
Microsoft hasn’t shied away from fighting with Washington, D.C., at key moments in its history, ranging from its landmark antitrust battle with the Justice Department in the late 1990s to its Supreme Court fight against the Trump administration over DACA immigration protections.
The Redmond-based company has built one of the deepest government-relations operations in tech, led by President and Vice Chair Brad Smith, a former D.C. lawyer whom the New York Times once called “a de facto ambassador for the technology industry at large.”
Anthropic sued the Department of War on Monday over the designation, a label historically reserved for foreign adversaries. The designation came after contract negotiations collapsed when Anthropic refused to drop two guardrails on its AI models: no use for fully autonomous weapons and no use for mass domestic surveillance of Americans.
President Trump separately directed all federal agencies to stop using Anthropic’s technology.
OpenAI, meanwhile, moved quickly to fill the gap left by Anthropic, announcing its own Pentagon deal on the same day the designation came down. CEO Sam Altman later acknowledged the timing looked “opportunistic and sloppy.” Thirty-seven engineers and researchers from OpenAI and Google, including Google chief scientist Jeff Dean, separately filed their own amicus brief in support of Anthropic.
In its amicus brief, Microsoft said AI should not be used “to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war,” aligning itself with Anthropic’s position on the two sticking points in the negotiations.
Microsoft also flagged a double standard in the government’s approach: the Pentagon gave itself six months to transition off Anthropic’s models but made the designation effective immediately for contractors. Without a restraining order, Microsoft warned, it and other companies would have to “act immediately to alter existing product and contract configurations” for the military.
