The Department of Defense barred contractors from using Anthropic’s AI systems after the company declined to permit government use of its models for all lawful purposes.
In a statement on X, Secretary of War Pete Hegseth ordered the U.S. Department of War to designate AI developer Anthropic a “supply chain risk,” restricting the use of the company’s AI systems in defense contracts after a dispute over how its models could be used by government agencies.
Anthropic confirmed the designation in a statement published on March 5, saying it received notice from the Department of War that the company had been classified as a supply chain risk, a move that affects the use of its Claude AI model in defense programs.
The dispute centers on federal contracting terms requiring AI providers working with the U.S. government to allow their systems to be used for any lawful purpose. Anthropic said it had objected to that condition.
In a statement explaining its position, the company said the government required AI vendors to “accede to ‘any lawful use’ and remove safeguards” when supplying models to federal agencies. Anthropic said it requested two exceptions to that requirement, covering domestic surveillance of U.S. citizens and the development of fully autonomous weapons systems.
“The Department of War has stated they will only contract with AI companies who accede to ‘any lawful use’ and remove safeguards,” Anthropic said in a statement published on its website.
Anthropic said it supports national security applications of AI but declined to remove those restrictions from its usage policies.
The supply chain risk designation, which the Pentagon uses to limit reliance on technology providers it considers potential security risks, prevents defense contractors from using Anthropic’s AI systems in Department of Defense programs unless the classification is lifted.
Anthropic CEO Dario Amodei said the company plans to challenge the designation in court.
“We intend to challenge the designation and defend the safeguards we believe are necessary,” Amodei said in a company statement.
Anthropic said the dispute does not affect most commercial deployments of its Claude models but could restrict their use in national security and defense contracts.
The conflict highlights an emerging issue in federal AI procurement: whether developers can impose limits on how their models are used when supplying technology to government agencies. The Pentagon has indicated that contractors must ensure systems used in defense programs can be deployed for any lawful government purpose.
The legal challenge filed by Anthropic is expected to determine whether the federal government can require such conditions in AI procurement contracts.