Microsoft Backs Anthropic in Legal Fight Over Pentagon Blacklist
Microsoft has voiced support for artificial intelligence startup Anthropic in its escalating dispute with the United States Department of Defense, asking a federal judge to temporarily halt the Pentagon’s decision to blacklist the company. In a filing submitted to the U.S. District Court in San Francisco, Microsoft argued that a temporary restraining order would prevent immediate disruptions to technology currently used by the U.S. military.
The company warned that the Defense Department’s designation of Anthropic as a “supply chain risk” could force technology providers to rapidly reconfigure products and contracts tied to government systems. According to Microsoft, such abrupt changes could create operational challenges and disrupt the military’s access to advanced AI tools. The filing states that delaying the ban would allow a more orderly transition while negotiations continue.
Anthropic filed a lawsuit earlier this week challenging the Pentagon’s decision, describing the move as unlawful and damaging to its business. The company argues that the blacklist jeopardizes hundreds of millions of dollars in government contracts and could have long-term consequences for its relationships with federal agencies. The lawsuit marks one of the most significant legal clashes yet between a major AI developer and the U.S. government.
The Pentagon banned Anthropic’s technology last week after negotiations between the two sides broke down. At the center of the dispute were restrictions Anthropic wanted to impose on how its AI systems could be used by the military. The company sought guarantees that its models would not be deployed in fully autonomous weapons or for large-scale domestic surveillance.
Defense officials rejected those limitations, insisting that the military must retain full access to critical technology for any lawful purpose. When the parties failed to reach an agreement, the Defense Department designated Anthropic a supply chain risk—a classification rarely applied to domestic technology companies and historically associated with foreign security threats.
Microsoft’s legal filing takes the form of an amicus brief, a “friend of the court” submission indicating that the company is not a party to the lawsuit but believes the outcome could significantly affect its operations. The tech giant has substantial financial and strategic ties to Anthropic, having announced plans last year to invest up to $5 billion in the company.
The dispute also highlights the broader rivalry and partnerships shaping the AI industry. While Microsoft has long been a major investor in OpenAI, it has also deepened its relationship with Anthropic as competition intensifies among leading AI developers.
Following the Pentagon’s decision, cloud providers including Microsoft, Amazon, and Google reassured customers that Anthropic’s products would remain available on their platforms for commercial use outside of defense contracts. The restrictions apply specifically to Pentagon-related work.
Microsoft said the temporary restraining order would provide time for both sides to negotiate a compromise that preserves access to advanced technology while addressing ethical concerns about AI deployment. The company emphasized that both the government and technology industry share the goal of ensuring responsible use of artificial intelligence.
Founded in 2021 by former OpenAI researchers, Anthropic has quickly grown into one of the most valuable AI startups in the United States. The company’s flagship models, known as Claude, have gained traction across enterprise and government sectors, helping drive the firm’s valuation to around $380 billion.
The outcome of the legal fight could have significant implications for how the U.S. government works with private AI companies, particularly as artificial intelligence becomes increasingly central to national security and defense operations.