Microsoft Backs Anthropic in Court Battle Against Pentagon's National Security Label

When talks collapsed, Anthropic walked away from a $200 million Pentagon contract. OpenAI, its biggest rival, quickly stepped in to take its place

Summary
  • Microsoft has backed Anthropic in court, urging a US judge to temporarily block the Pentagon’s decision to label the AI firm a “national security supply-chain risk.”

  • Anthropic is suing the Pentagon, arguing the designation is unlawful and retaliatory.

  • The case could reshape government-AI partnerships, as the blacklist may disrupt contractors.

Microsoft has stepped into one of the most consequential legal battles in the AI sector. The tech giant has filed a court document in support of Anthropic's lawsuit against the US Department of Defense.

The filing, made on Tuesday in a federal court in San Francisco, backs Anthropic's request to temporarily block the Pentagon from enforcing the "supply chain risk" label it slapped on the AI company.

The dispute has its roots in a breakdown of negotiations between Anthropic and the Defense Department over how the company's AI model, Claude, could be used by the US military.

The sticking point was that Anthropic refused to remove safety guardrails built into Claude that prevented it from being used in autonomous weapons systems or mass surveillance operations.

When talks collapsed, Anthropic walked away from a $200 million Pentagon contract. OpenAI, its biggest rival, quickly stepped in to take its place.

Shortly after, Defense Secretary Pete Hegseth designated Anthropic a national security supply chain risk, a label typically reserved for foreign adversaries, not American technology companies.

The move effectively placed Anthropic in the same category as Chinese tech giant Huawei, a company the US government has spent years trying to shut out of global technology supply chains. For Anthropic, that comparison was the final straw.

Anthropic Sues Pentagon

On Monday, Anthropic filed a lawsuit to block the Pentagon's designation, arguing it was not only legally baseless but a deliberate act of retaliation for refusing to compromise on its AI safety principles.

The company warned that if left unchallenged, the move could set a dangerous precedent, effectively punishing any technology company that dares to push back against government demands.

Why Microsoft Got Involved

Microsoft's intervention came in the form of an amicus brief, a legal filing submitted by a party not directly involved in a case but with relevant expertise or a stake in its outcome, Reuters reported. The company has both a financial and operational interest in seeing Anthropic prevail.

On the financial side, Microsoft announced plans in November to invest up to $5 billion in Anthropic, making it one of the AI startup's most significant backers. It has also been a major investor in OpenAI since 2019.

On the operational side, Microsoft integrates Anthropic's products into technology it supplies directly to the US military, meaning the Pentagon's designation hits Microsoft's own business.

In its filing, Microsoft argued that the temporary block was urgently needed to prevent costly disruptions for contractors who rely on Anthropic's technology.

It also pointed out an inconsistency in the Pentagon's order: while the Defense Department gave itself six months to phase out Anthropic's products internally, it offered no such transition period to outside contractors like Microsoft that use Anthropic's tools to deliver services to the military.

"Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support U.S. government missions will be forced to account for a new risk in their business planning," the company said as quoted by Reuters.

Microsoft further argued that a temporary pause would create space for a negotiated solution, one that preserves the military's access to cutting-edge AI while ensuring the technology is not used for domestic mass surveillance or to initiate armed conflict without meaningful human oversight.

Broader Support

Microsoft is not alone in backing Anthropic. On Monday, a group of 37 researchers and engineers from OpenAI and Google also filed an amicus brief in support of Anthropic, a rare show of solidarity across competing AI firms.

The judge overseeing the case must first approve Microsoft's request to formally enter its brief, though courts routinely allow outside parties to weigh in on cases of significant public interest.

