The US DoD labeled Anthropic a “supply chain risk” over its AI policy preferences
DoD CTO Emil Michael claims the safeguards built into Claude could lead to ineffective military equipment
The designation requires defense contractors to certify they are not using Claude for Pentagon projects
US Department of Defense Chief Technology Officer Emil Michael on Thursday defended the government’s decision to label Anthropic a “supply chain risk,” arguing that AI systems whose policies conflict with government requirements could undermine military effectiveness.
Speaking on CNBC’s Squawk Box, Michael said the designation stems from concerns that policy preferences embedded within an AI model could affect the performance or availability of technologies used by the military. “We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection,” he said.
Michael added that Anthropic’s business with the US government represents only a small portion of its overall revenue, noting that the company maintains a much larger commercial operation.
He also dismissed claims that the government has contacted private companies urging them to avoid working with Anthropic, calling such allegations “rumors” and stating that the department does not interfere with firms outside its own supply chain.
Anthropic vs Pentagon
The remarks come days after Anthropic, co-founded and led by CEO Dario Amodei, filed a lawsuit against the Pentagon, escalating a major dispute between a leading AI company and the US defense establishment.
The conflict centers on the Trump administration’s decision to classify the company as a “national security supply chain risk,” a designation that effectively directs government agencies to cut ties with the firm.
Anthropic has argued that the label is legally unjustified and represents retaliation for the company’s refusal to modify safety safeguards built into its AI system Claude. According to Amodei, the Pentagon warned the company that it could lose government contracts or face the supply chain risk designation if it did not remove restrictions limiting certain uses of its technology.
The company said two potential applications have never been included in its defense contracts: mass domestic surveillance and fully autonomous weapons. Amodei said Anthropic believes large-scale surveillance poses serious risks to civil liberties, while fully autonomous weapons remain unsafe because current frontier AI systems are not yet reliable enough to operate without human oversight.
In a public statement, Amodei said the Pentagon had indicated it would only contract with AI providers that allow “any lawful use” of their systems and remove safeguards restricting certain applications. He added that officials also suggested invoking the Defense Production Act to force the company to remove those protections.
Despite the pressure, Amodei said the company would not change its stance. “We cannot in good conscience accede to their request,” he wrote, adding that Anthropic had instead offered to collaborate with the Defense Department on research aimed at improving the reliability of AI systems for defense applications.