Anthropic filed a federal lawsuit against the US Department of Defense and several other federal agencies on Monday, challenging the government's designation of the artificial intelligence company as a "supply-chain risk."
Last week, the Pentagon formally sanctioned Anthropic, capping a weeks-long public dispute over restrictions on using its generative AI technology for military purposes, including autonomous weapons.
"We do not believe this measure is legally sound, and we perceive no alternative but to contest it in court," Anthropic CEO Dario Amodei wrote in a blog post on Thursday.
The lawsuit, filed in federal court in California, asks a judge to vacate the designation and bar federal agencies from enforcing it. "The Constitution does not permit the government to wield its immense authority to penalize an enterprise for its protected expression," Anthropic wrote in the filing. "Anthropic turns to the judiciary as a final recourse to uphold its entitlements and halt the Executive's unlawful retaliatory campaign."
Anthropic is also seeking a temporary restraining order to preserve its government sales. The company proposed that the government respond to the request by 9 pm Pacific on Wednesday and that a judge hold a hearing on the matter on Friday.
The AI startup, which develops a family of AI models called Claude, faces the loss of hundreds of millions of dollars in annual revenue from the Pentagon and the broader US government. It also stands to lose business from software companies that build Claude into services they sell to federal agencies. Several Anthropic customers have reportedly said they are weighing alternatives because of the Defense Department's risk designation.
Amodei said the "vast majority" of Anthropic's customers will not need to make any changes. The US government's designation "plainly pertains solely to the utilization of Claude by customers as a direct component of agreements with the" military, he said. General use of Anthropic's technology by military contractors should be unaffected.
The Department of Defense, also known as the Department of War, declined to comment on Anthropic's lawsuit.
White House spokesperson Liz Huston told WIRED on Friday that "our military will adhere to the United States Constitution, not the stipulations of any 'woke' AI corporation." She added that the administration is making sure its "brave service members possess the suitable instruments they require to succeed and will ensure they are never constrained by the ideological caprices of any Big Tech executives."
Lawyers who specialize in government contracting say Anthropic faces an uphill battle in court. The rules that allow the Department of Defense to designate a technology company as a supply-chain risk offer few avenues for appeal. "It is entirely within the government's discretion to establish the parameters of a contract," says Brett Johnson, a partner at the law firm Snell & Wilmer. The Pentagon, he adds, also has the authority to declare that a product of concern, if used by any of its vendors, "impedes the government's capacity to fulfill its mission."
Anthropic's best chance of prevailing in court may be showing that it was singled out, Johnson suggests. Shortly after Defense Secretary Pete Hegseth announced plans to designate Anthropic a supply-chain risk, rival OpenAI said it had struck a new deal with the Pentagon. That could prove pivotal to Anthropic's legal argument if the company can show it was seeking terms comparable to those given to the ChatGPT maker.
OpenAI confirmed that its deal included contractual and technical safeguards to ensure its technology would not be used for mass domestic surveillance or to direct autonomous weapons systems. It also said it opposed the action against Anthropic and did not understand why its rival could not reach the same arrangement with the government.
Military Imperative
Hegseth has made military adoption of AI a priority, with posters recently seen in the Pentagon showing him pointing and bearing the message, "I want you to use AI." The dispute with Anthropic began in January, after Hegseth ordered several AI providers to agree that the department was free to use their technology for any lawful purpose.
Anthropic, currently the only company supplying AI chatbot and analysis tools for the military's most sensitive work, refused. It says its technology is not yet reliable enough to be used for mass domestic surveillance of Americans or for fully autonomous weapons. Hegseth has said Anthropic wants veto power over decisions that should belong to the Defense Department.