US Military Was Using Anthropic Models as It Bombed Iran, Despite Trump’s Ban Order

The United States Central Command in the Middle East reportedly used Anthropic’s Claude AI for intelligence assessments, including target identification and simulation of combat scenarios

Summary
  • US military was using Anthropic’s Claude AI during its Iran airstrikes despite Donald Trump’s ban call hours earlier.

  • US Central Command used the tool for intelligence, target identification and combat simulations, a WSJ report claimed.

  • The Pentagon had also deployed Claude in the operation to capture Nicolás Maduro.

The US military was still using San Francisco-based startup Anthropic’s artificial intelligence models in its operations as it launched airstrikes on Iranian cities, reported The Wall Street Journal. Just a few hours before the attack, President Donald Trump had called for a ban on the company across federal government branches.

According to the report, which cited sources, the United States Central Command in the Middle East used Anthropic’s Claude AI for intelligence assessments, including target identification and simulation of combat scenarios. In a separate, earlier report, the publication had also suggested that the United States Department of Defense deployed the model during its operation to capture Venezuelan President Nicolás Maduro.


Its use in such sensitive missions is believed to have influenced the administration’s decision to phase out the startup’s technology within six months. The move follows months of disagreements between Anthropic and the United States Department of Defense (now renamed the Department of War) over usage terms, prompting the Pentagon to seek alternative AI tools as it works to replace Claude across military systems.

Anthropic was previously among a small group of AI developers, alongside Google and others, awarded Pentagon contracts worth up to $200 million to deliver advanced AI capabilities. Its Claude model was cleared for classified military and intelligence tasks through partnerships with Palantir Technologies and Amazon Web Services.

The model’s reported use in a January 2026 US operation that led to the capture of Nicolás Maduro brought renewed scrutiny to its operational role in high-security missions.

Tensions started after US Secretary of War Pete Hegseth gave Anthropic a deadline to allow unrestricted use of its AI tools for any lawful military purpose. CEO Dario Amodei rejected the demand, saying the company would not remove safeguards designed to prevent uses such as autonomous weapons or mass surveillance, even if it meant losing government contracts.

In a post on Truth Social on February 27, Donald Trump criticised Anthropic as “leftwing” and “woke”, alleging that its actions were putting American lives, troops and national security at risk. He directed all federal agencies to immediately stop using the company’s technology, adding that there would be a six-month phase-out period for departments currently deploying Anthropic’s products.

The company said it permitted defence use of its technology with two exceptions: mass domestic surveillance of Americans and fully autonomous weapons. It has also challenged its designation as a defence supply chain risk, calling it an unprecedented move against a US firm and saying it plans to contest the decision in court.

The conflict has prompted the Defense Department to secure alternative contracts for AI tools from other developers. The Pentagon has reached deals with OpenAI, maker of ChatGPT, and xAI to supply models for classified settings, but military officials and AI experts say fully replacing Claude across all systems could take months.

On Saturday, OpenAI co-founder Sam Altman announced that the company had reached an agreement with the Pentagon to deploy advanced AI systems in classified environments, adding that the agreement carries more guardrails than any previous deal for classified AI deployments, including Anthropic’s.

“The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decision-maker under the same authorities,” the contract said, according to the company.

According to the ChatGPT maker, it has set three firm red lines in its work with the US Department of Defense: its technology cannot be used for mass domestic surveillance, to control autonomous weapons systems, or to make high-stakes automated decisions such as social credit scoring. The company said that unlike some other AI labs that rely mainly on usage policies, it applies a broader safety framework that includes technical guardrails, cloud-based deployment, oversight by cleared personnel and strict contractual protections, in addition to existing US laws.
