The Trump administration argued in court filings on Tuesday that it did not violate Anthropic's First Amendment rights by designating the artificial intelligence company a supply-chain risk, and predicted that the company's lawsuit against the government would fail.
"The First Amendment does not give a company license to unilaterally dictate contract terms to the government, and Anthropic offers nothing to support such an extreme claim," attorneys for the US Department of Justice wrote.
The response was filed in federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to hit the company with a designation that can bar firms from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority by imposing the designation and blocking the company's systems from use within the department. If the designation stands, Anthropic could lose up to billions of dollars in expected revenue this year.
Anthropic is asking to be allowed to continue operating as usual while the lawsuit plays out. Rita Lin, the judge presiding over the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to grant Anthropic's request.
Justice Department attorneys, representing the Defense Department and other government agencies in Tuesday's filings, described Anthropic's concerns about potential lost business as "legally insufficient to constitute irreparable harm" and urged Judge Lin to deny the company's request for temporary relief.
The attorneys also wrote that the Trump administration was compelled to act because of "concerns about Anthropic's potential future conduct if it retained access" to government technology systems. "No one has purported to curb Anthropic's expressive activities," they wrote.
The government contends that Anthropic's attempt to limit how the Pentagon can use its AI technology led Defense Secretary Pete Hegseth to "reasonably" determine that "Anthropic personnel could sabotage, maliciously introduce unwanted functionality, or otherwise subvert the design, integrity, or operation of a national security system."
The Defense Department and Anthropic have clashed over potential restrictions on the company's Claude AI models. Anthropic maintains that its models should not be used to enable mass surveillance of Americans and are not yet reliable enough to operate fully autonomous weapons.
Several legal experts previously told WIRED that Anthropic has a strong claim that the supply-chain action amounts to unlawful retaliation. But courts often defer to national security arguments from the government, and Defense Department officials have portrayed Anthropic as a vendor that has grown defiant and whose technology cannot be trusted.
"Specifically, the Department of War became concerned that giving Anthropic continued access to DoW's technical and tactical warfighting systems would introduce an unacceptable risk into DoW's supply chains," Tuesday's filing states. "AI systems are uniquely susceptible to interference, and Anthropic could attempt to disable its systems or affirmatively alter its model's behavior before or during active combat operations if Anthropic, in its own discretion, believed its corporate 'limits' were being violated."
The Defense Department and several other government agencies are working to replace Anthropic's AI tools with offerings from rival tech companies in the coming months. One of the military's primary uses of Claude runs through Palantir's data analytics software, people familiar with the matter have told WIRED.
In Tuesday's filings, the attorneys argued that the Pentagon "cannot simply flip a switch at a time when Anthropic is currently the only AI model approved for use" on the agency's "classified systems while intense combat operations are ongoing." The agency is working to deploy AI systems from Google, OpenAI, and xAI as alternatives.
A number of companies and groups, including AI researchers, Microsoft, a federal employee union, and former military leaders, have filed legal briefs in support of Anthropic. No such briefs have been filed in support of the government.
Anthropic has until Friday to file a response to the government's arguments.