Judge Blocks Pentagon Move Against Anthropic In AI Weapons Dispute

Federal judge pauses Pentagon action against Anthropic over AI weapons dispute.

Department of War and Anthropic. (Reuters)
  • Federal judge grants injunction blocking Pentagon actions against Anthropic
  • Court questions designation of company as supply chain risk
  • Dispute centers on refusal to allow AI use in weapons
  • Case continues as court reviews legality of government actions

A federal judge in California on March 26 granted Anthropic a temporary injunction against the U.S. Department of Defense, pausing punitive measures tied to its refusal to support autonomous weapons use. The ruling centers on claims that the government violated the company's First Amendment rights by labeling it a supply chain risk. The dispute highlights growing tensions between AI developers and defense agencies over military applications of artificial intelligence.

Judge Rita Lin ruled that the government's designation of Anthropic as a "supply chain risk" was likely unlawful, ordering a halt to punitive measures while the case proceeds in the Northern District of California.

The injunction, which has been stayed for one week, comes amid a broader legal challenge filed by Anthropic earlier in March alleging violations of its constitutional rights.

"The Department of War provides no legitimate basis to infer from Anthropic's forthright insistence on usage restrictions that it might become a saboteur," Lin wrote in the ruling.

Claude AI Restrictions At Center Of Pentagon Dispute

The case centers on Anthropic's refusal to allow its Claude artificial intelligence model to be used in fully autonomous lethal weapons or domestic mass surveillance systems.

The Department of Defense, the U.S. agency responsible for national security and military operations, had moved to block government use of Anthropic's technology after labeling the company a supply chain risk.

Anthropic argued the designation was punitive and intended to coerce the company into changing its policies, potentially costing it hundreds of millions of dollars or more in lost government contracts.

"These actions are unprecedented and unlawful," the company said in its complaint, adding that the government cannot use its authority to punish protected speech.

During court proceedings, Lin questioned why the Pentagon did not simply end its relationship with Anthropic rather than issue a broader designation that could affect the company across federal agencies.

"It looks like an attempt to cripple Anthropic," Lin said during the hearing.

Government Defense And Broader Implications

Government attorneys argued that statements by Defense Secretary Pete Hegseth urging contractors to avoid working with Anthropic carried no legal force and therefore did not constitute irreparable harm.

When pressed on why such statements were made without legal backing, government lawyers said they did not know, according to the court proceedings.

The ruling has implications for federal agencies that have integrated Anthropic's systems into operations, including reported use in military analysis and targeting processes tied to the conflict with Iran.

Replacing such systems could prove complex, given the extent to which the technology has been embedded across government workflows.

The case underscores a growing divide between artificial intelligence companies and defense institutions over the acceptable boundaries of AI deployment in warfare and surveillance.
