Anthropic Banned by Pentagon After Dispute Over Military Use of AI

09:48 13/03/2026
GMT Eight
The U.S. Department of Defense has banned AI company Anthropic from its technology ecosystem after the firm refused to allow its models to be used for autonomous weapons or domestic surveillance. The move has sparked concern among defense experts, who warn that losing access to Anthropic’s widely adopted AI tools could disrupt military operations and set a troubling precedent for government-technology partnerships.

The United States Department of Defense has officially banned the use of Anthropic’s artificial intelligence models within its systems after a months-long internal review. The decision followed a dispute between the Pentagon and the company over how its technology could be used in military operations. Anthropic had insisted that its AI models should not be deployed for autonomous weapons systems or domestic surveillance, conditions that the Pentagon ultimately rejected.

In a rare and controversial step, the Defense Department labeled Anthropic a “supply chain risk,” a designation typically reserved for foreign adversaries rather than U.S. technology companies. Defense contractors and vendors working with the Pentagon must now certify that they are not using Anthropic’s AI tools in projects related to government contracts. The decision effectively removes one of the military’s most prominent AI providers from sensitive defense environments.

Anthropic responded by filing a lawsuit against the Trump administration, arguing that the ban is unlawful and threatens hundreds of millions of dollars in government contracts. The company said the government’s actions could cause severe financial and operational damage to its rapidly growing AI business. The legal challenge sets the stage for a high-profile confrontation between one of the world’s leading AI startups and the U.S. government.

The decision surprised many policymakers and defense officials because Anthropic’s models had previously been viewed as among the most capable available to government agencies. Its flagship AI system, Claude, had already been deployed across classified networks and was widely praised for its reliability and ability to explain its reasoning in detailed steps. That transparency made it particularly useful for military planning and intelligence analysis.

Former defense officials say the ban could have immediate operational consequences. According to several experts, many military planners and analysts had already integrated Anthropic’s tools into their workflows, relying on them to analyze complex data and assist in decision-making. Removing the technology abruptly could slow operations and force agencies to transition quickly to alternative systems.

Anthropic was founded in 2021 by Dario Amodei and a group of former researchers from OpenAI who left the organization over concerns about AI safety and governance. Since launching its Claude family of models in 2023, the company has grown rapidly and raised billions of dollars from investors, reaching an estimated valuation of around $380 billion. Its focus on responsible AI development has been central to its brand and strategy.

Part of Anthropic’s success in government markets came from partnerships with major technology companies and defense contractors. Through collaboration with Amazon Web Services, its AI models became available within secure government cloud environments. A partnership with Palantir also helped integrate Claude into intelligence and defense workflows, accelerating adoption across federal agencies.

Despite those successes, tensions between the company and the current administration reportedly grew in recent months. Some analysts believe the conflict reflects broader disagreements over how artificial intelligence should be regulated and deployed within national security systems. Critics argue the dispute risks turning technical policy debates into political battles.

Government officials insist the decision is based on operational requirements rather than politics. According to a senior Defense Department official, the military cannot allow technology providers to limit how their tools are used in lawful operations. The Pentagon maintains that it must retain full authority over the capabilities available to U.S. forces.

The transition away from Anthropic could take time, particularly because its models had already been integrated into sensitive defense systems. Experts say replacing such technology during active operations can be complicated and costly. The situation is especially delicate as the U.S. continues military activities abroad.

For now, the Pentagon and Anthropic appear headed toward a legal and policy showdown that could shape the future relationship between AI companies and national security agencies. The outcome may determine not only the fate of Anthropic’s government business but also how the U.S. military collaborates with private AI developers in the years ahead.