- Microsoft backs Anthropic lawsuit against Pentagon supply-chain risk designation.
- Filing supports temporary restraining order blocking Defense Department action.
- Dispute centers on Anthropic limits on autonomous weapons use.
- Case highlights tensions over military use of artificial intelligence.
Microsoft has submitted a legal brief supporting artificial intelligence startup Anthropic's effort to temporarily block the U.S. Department of Defense from designating the company a supply-chain risk, intensifying a controversy over the use of AI technology in military operations.
The brief, filed on Tuesday in the still-pending federal court case in San Francisco, backs Anthropic's request for a temporary restraining order against the Pentagon's designation while the case proceeds.
Anthropic filed its suit earlier this week to block the U.S. government's move to place the company on a national security blacklist. The dispute stems from the startup's refusal to drop safeguards restricting the use of its AI technology for autonomous weapons or domestic surveillance.
Microsoft said it has a direct interest in the outcome because Anthropic's Claude AI system is embedded in technology products Microsoft supplies to the U.S. military.
Technology Industry Fears Over AI Policy Row
Microsoft's filing argued that allowing the Pentagon's designation to take effect before the case is heard would disrupt technology systems that government contractors and defense agencies already rely on.
In its court filing, the company said that letting the designation proceed without a temporary restraining order would force Microsoft and other government contractors that build solutions supporting U.S. government operations to factor additional risk into their business planning.
Microsoft also cautioned that contractors using Anthropic's systems could face unexpected operational disruptions if the designation is not reversed.
The company claimed that the Defense Department gave itself six months to transition away from Anthropic's technology but did not offer a similar transition period to the commercial providers contracted to serve the military.
"Microsoft is not filing this brief out of altruism. Claude is embedded in defence contracts worth billions. If Anthropic disappears from the approved vendor list overnight, Microsoft's entire government-facing AI stack gets an unplanned hole in it. That is the real argument here."
According to legal experts, courts often allow companies to file amicus briefs in cases that could affect their businesses or the regulations governing them.
The judge overseeing the case must first approve Microsoft's request to make the filing official.
Wider Industry Support
Other parts of the technology industry have also come out in favor of Anthropic in the case. On Monday, OpenAI and Google filed a separate amicus brief from 37 engineers and researchers supporting the AI company's position.
The group argued that the Pentagon's action could chill candid discussion within the AI research community about the dangers and reasonable constraints of sophisticated artificial intelligence systems.

Anthropic has argued that existing AI systems cannot yet be trusted enough to operate fully autonomous weapons systems, and that they should not be used to carry out mass surveillance of civilians.
Microsoft said a temporary restraining order would allow time for discussions between Anthropic and the U.S. government while preserving access to advanced technology for critical defense uses.
On Tuesday, Microsoft shares closed 0.7 percent higher at $426.58, while Alphabet gained 0.6 percent to close at $182.74, according to Reuters.
A High-Stakes Test for AI Governance
The controversy demonstrates growing tension between government agencies and technology firms over how artificial intelligence is used in national security. Anthropic's Claude system is widely used in commercial software and cloud-based AI services, and the company has rapidly become one of the leading developers of large language models.
Analysts believe the outcome of the suit could shape how AI companies negotiate with governments seeking greater control over the application of new technology.
"This case is the first time a major AI company has gone to court to defend an ethical use policy against a government trying to remove it. Whatever the judge decides, it sets a precedent for every AI developer that has ever written a line in their terms of service saying their model cannot be used to kill people autonomously."
The case also reflects broader debates within the technology sector over how national security interests can be reconciled with ethical concerns about AI use.
Meanwhile, the federal court must decide whether to grant the temporary restraining order Anthropic is seeking before the broader legal conflict is heard.