Anthropic Sues Pentagon Over AI Blacklist, Calling It Unconstitutional; Says It May Cost Billions in Defense Revenue

Company says the designation is unconstitutional and could cost it billions in revenue and defense contracts.

  • Anthropic sues U.S. Defense Department over supply-chain risk designation.
  • Pentagon restricted use of Anthropic AI after dispute over safeguards.
  • Company says designation violates constitutional protections and harms business.
  • Case highlights tensions over military use of advanced AI systems.

Anthropic, an artificial intelligence startup, has filed a lawsuit seeking to have the U.S. Department of Defense remove it from a national security blacklist, a move that intensifies an escalating battle over the constraints the company places on military uses of its AI.

The lawsuit, filed on Monday in federal court in California, challenges the Pentagon's designation of Anthropic as a supply-chain risk, issued after the company refused to strip its systems of safeguards limiting their use in areas such as autonomous weapons and domestic surveillance.

Anthropic argued that the designation is unlawful and violates its constitutional protections.

Calling the government's actions unprecedented and unlawful, the company argued in its court filing that the government cannot use its vast power to punish a company for constitutionally protected speech.

The Pentagon issued the designation last week after months of negotiations between the two sides over restrictions built into Anthropic's AI model, Claude. Sources familiar with the talks said the model had been used in some military-related operations even before the dispute escalated.

The conflict underscores growing tensions between the U.S. government and major AI companies over how the most advanced artificial intelligence should be deployed in national security operations.

Pentagon Controversy Raises Questions Over Control of AI

The move followed Anthropic's refusal to scale back guardrails designed to prevent its technology from being used in fully autonomous weapons systems or domestic surveillance operations.

The supply-chain risk designation was formally issued by Defense Secretary Pete Hegseth, restricting the Pentagon's use of Anthropic's AI systems in defense projects.

President Donald Trump added to the controversy, posting on social media that the federal government should cease using Claude across all of its operations.

According to Axios, the White House is drafting an executive order directing federal agencies to remove Anthropic's AI tools from government networks.


Anthropic says the designation has badly damaged its brand and its business with federal agencies and defense contractors. Even so, the legal challenge has not closed the door: CEO Dario Amodei has said the company remains willing to renegotiate its relationship with the U.S. government.


The Pentagon declined to comment on the suit, citing ongoing litigation. The dispute is widely seen as a test of who will ultimately control the deployment of powerful AI systems in national security matters: government agencies or the technology companies.

Revenue and Reputation Risks Mount

In court filings, Anthropic executives said the government's action could cause significant financial losses and disrupt contracts with customers in both the public and private sectors.

Chief Financial Officer Krishna Rao estimated that the effect on revenue could be enormous if the designation is not removed.

The government's action could reduce the company's 2026 revenue by several billion dollars, depending on how likely particular clients are to pull back across Anthropic's entire business, Rao said in the filing.

He warned that the consequences could be largely irreversible: if the designation is not lifted, Rao said, the company's business could become very difficult to restore.

Anthropic projects that hundreds of millions of dollars in expected revenue tied to defense-related work may be at risk, and says the label could also erode investor trust and raise the firm's cost of raising capital.

Thiyagu Ramasamy, the company's head of public sector, said the move was already taking a toll on business relationships.

The government's actions are harming Anthropic immediately and irreparably, he said, adding that the designation also tarnishes the company's integrity and reputation as a trusted partner.

According to Ramasamy, more than $150 million in recurring annual revenue tied to defense contracts could disappear if contractors choose to distance themselves from the firm.

Industry Support and Broader Implications

The controversy has drawn attention across the technology industry, where companies are weighing the prospects of commercial expansion against the ethical issues surrounding military artificial intelligence.

On February 10, 2022, a coalition of 37 researchers and engineers from OpenAI and Google filed an amicus brief supporting the legal challenge. Google chief scientist Jeff Dean was among them.

The group argued that the government's actions could deter researchers from speaking candidly about the risks and limitations of AI technologies.

By silencing one lab, the researchers argued, the government is limiting the industry's ability to develop solutions.

Anthropic has also filed a second legal challenge with the U.S. Court of Appeals for the District of Columbia Circuit, arguing that the supply-chain risk designation could, after an interagency review, extend across the entire federal government.

The case comes as AI developers compete more fiercely for government contracts. The Defense Department has signed deals worth up to $200 million with several large AI companies, including Anthropic, OpenAI and Google.

Shortly after the Pentagon's action against Anthropic, Microsoft-backed OpenAI announced a separate arrangement to deploy its AI technology on Defense Department systems.

OpenAI CEO Sam Altman said the Pentagon's approach is consistent with the company's internal principles, which bar the automation of weapons development and reject mass surveillance of the population.


The outcome of Anthropic's lawsuit may ultimately determine how AI developers draw guardrails around military applications of their technology, and could shape future relationships between the government and individual AI companies as the race to deploy advanced artificial intelligence accelerates.
