AI company Anthropic filed a federal lawsuit challenging the U.S. Department of War’s decision to designate the firm a “supply chain risk,” a move that bars military contractors from using the company’s technology in defense work.
The case, Anthropic v. Department of War, was filed in federal court in Washington, according to court filings. The lawsuit asks the court to block the government’s designation and to allow the company to continue supplying its AI tools to defense contractors while the dispute is resolved.
The dispute began last month after negotiations between Anthropic and the Pentagon broke down over how the military could use the company’s AI model, Claude. Anthropic sought limits preventing its systems from being used for mass domestic surveillance or fully autonomous weapons.
Following the impasse, Secretary of War Pete Hegseth directed the Department of War to designate the company a supply chain risk under federal acquisition security authorities. The designation allows the Pentagon to restrict vendors deemed to pose national security risks to military supply chains and sensitive systems.
In a public statement, Anthropic said it would contest the decision. The company said, “We will challenge any supply chain risk designation in court.”
The designation effectively bars defense contractors, suppliers, and partners that do business with the U.S. military from any defense-related commercial activity with Anthropic.
According to the lawsuit, Anthropic argues the designation violates constitutional protections and was imposed without sufficient due process. The company also says the government’s action could harm its relationships with technology partners and federal customers.
The dispute comes as AI systems are increasingly deployed in national security applications, including intelligence analysis and operational planning. Anthropic’s Claude model was previously used in classified defense networks supporting military missions.
Industry partners have also weighed in on the case. Microsoft filed an amicus brief supporting Anthropic’s request for a temporary restraining order that would pause the Pentagon’s designation while the court reviews the matter. Microsoft said the government’s action could disrupt suppliers that rely on Anthropic’s technology to support federal missions.
In announcing the designation, Secretary Hegseth said the decision was necessary to ensure the military maintains unrestricted access to AI tools used for lawful defense purposes.
“Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every lawful purpose in defense of the Republic,” Hegseth said in a public statement.
Anthropic responded that it supports lawful national security uses of AI but will not permit its systems to be used for mass surveillance of Americans or fully autonomous lethal weapons.
The legal challenge follows the Pentagon’s broader directive that federal agencies phase out the use of Anthropic’s technology in defense-related work.
The case remains pending in federal court as Anthropic seeks judicial review of the government’s designation and its impact on defense contracting.