Anthropic clashes with US Department of Defence over military use of Claude.
Pentagon demands “all lawful use”; Anthropic refuses to drop safety guardrails.
Claude reportedly used in classified raid, heightening tensions.
Standoff highlights moral stakes of AI in warfare amid rising global arms race.
There is a new drama unfolding in the US, but this time not in the Department of Homeland Security or the Department of Commerce. It is within the corridors of the US Department of Defence.
At the heart of this drama lies a philosophical and strategic clash: who should control how artificial intelligence (AI) is used in war — and on what terms?
Anthropic, a San Francisco-based AI start-up founded by ex-OpenAI researchers, best known for its large language model Claude, found itself at the centre of it. Designed with safety in mind, Claude has been touted as one of the most advanced and responsible AI tools on the market. In mid-2025, Anthropic won a significant contract — around US $200mn — to integrate Claude into Pentagon systems, including classified networks.
It was supposed to be the start of a strategic partnership. Instead, it has become a case study in how quickly an alliance between an AI safety-focused company and the military can turn into a crisis.
Red Lines
Anthropic CEO Dario Amodei has never hesitated to express his concerns about the potential dangers of AI, and has centred the company's brand on safety and transparency. He advocates for "sensible AI regulation".
The company’s internal policies prohibit using Claude for certain military applications: most notably, fully autonomous weapons that can select and engage targets without humans in the loop, and mass domestic surveillance of citizens. These restrictions are rooted in ethical concerns — the fear that powerful AI could be misused in ways that undermine human control or civil liberties.
In an essay released last month, Amodei wrote, "Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies."
However, the Pentagon doesn’t see it that way. Defence officials — led by Defence Secretary Pete Hegseth — argue that these corporate guardrails are too restrictive for “all lawful use” by the military. In their view, if something is legal under US law and useful in a conflict, a vendor’s ethical qualms shouldn’t limit how the armed forces employ the technology.
The Escalation
The tension came to a head in recent weeks. The Wall Street Journal reported that Claude had been used by US forces in a classified operation — notably the raid that captured Venezuelan President Nicolás Maduro earlier this year — without detailed prior discussion with Anthropic.
The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela’s defence ministry. Anthropic’s terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.
Anthropic is the first AI developer whose model is known to have been used in a classified operation by the US Department of Defence. That episode is now cited on both sides as part of the problem: the Pentagon sees responsible deployment; Anthropic sees a lack of dialogue on sensitive uses.
In late February 2026, Hegseth summoned the Anthropic CEO to the Pentagon. According to US defence sources cited by Axios, the message was blunt: agree to let the military use Claude without the company’s usage restrictions — or face consequences.
The threats on the table are serious. The Pentagon has warned it may terminate Anthropic’s military contracts if it doesn’t comply, could label the company a “supply-chain risk” — a designation usually reserved for foreign adversaries — or even invoke the Defence Production Act to compel Anthropic to allow unrestricted use of its technology.
Anthropic’s stance reflects a broader conversation within tech about whether companies should limit how their creations are used, especially when lethal decisions are at stake. The company insists that human oversight and precautionary guardrails are essential.
The Pentagon, for its part, is focused on an accelerating global AI arms race. Officials cite competition with adversaries and the need to exploit AI across all lawful military contexts. In their telling, restrictive usage policies could hinder national defence and put soldiers at a disadvantage.
The conflict has had immediate strategic consequences. While Anthropic digs in, other AI firms — including Elon Musk’s xAI with its Grok model — have agreed to the Pentagon’s “all lawful use” framework, potentially positioning themselves as preferred defence partners.
Whatever the outcome in the coming days, this standoff has already underscored a deeper truth: when cutting-edge technology collides with national defence, the battle over AI isn’t just technical, legal or strategic — it is profoundly moral too, and the choices made now will reverberate well beyond the Pentagon’s walls.