After a weeks-long standoff between Anthropic and the Pentagon, the company notched a key victory: a judge granted a preliminary injunction in Anthropic’s lawsuit, which seeks to overturn the company’s governmental blacklisting while the case plays out.
“The Department of War’s own records show that it designated Anthropic a supply chain risk because of its ‘adversarial conduct in the press,’” wrote Judge Rita F. Lin of the U.S. District Court for the Northern District of California in her ruling, which takes effect in one week. “Punishing Anthropic for subjecting the government’s procurement practices to public scrutiny is a textbook case of unlawful First Amendment retaliation.”
A final ruling in the case could still be weeks or months away.
Anthropic spokesperson Danielle Cohen said in a statement Thursday: “We thank the court for moving quickly, and we’re pleased it agrees Anthropic is likely to succeed on the merits. While this lawsuit was necessary to protect Anthropic, our customers, and our partners, our primary goal remains working productively with the government to ensure all Americans benefit from safe, reliable AI.”
“I think this case speaks to an important debate,” Judge Lin said at Tuesday’s hearing. “On one side, Anthropic says its AI product, Claude, should not be used for autonomous lethal weapons or domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, it has to agree not to put it to those particular uses. On the other side, the Department of War says military leaders should decide how its AI capabilities are used.”
Judge Lin continued on Tuesday: “It is not my role to decide who is right in that debate… The Department of War gets to choose and procure the AI product it wants to deploy. And everyone, including Anthropic, agrees the Department of War is free to stop using Claude and find a more accommodating AI supplier.” She added, “As I see it, the core question in this case is … whether the government broke the law by going beyond those bounds.”
The saga began with a memo issued by Defense Secretary Pete Hegseth on January 9, which required that “any lawful use” language be written into all AI services procurement contracts within 180 days, including existing agreements with companies such as Anthropic, OpenAI, xAI, and Google. Anthropic’s negotiations with the Pentagon dragged on for weeks, centering on two conditions the company refused to drop on how the military could use its AI: no domestic mass surveillance, and no lethal autonomous weapons (AI systems that can kill targets without a human in the decision-making loop). What followed was an unpredictable chain of events: a barrage of social media insults, an official “supply chain risk” designation with the potential to severely hamper Anthropic’s business, rival AI companies rushing to sign new deals, and the resulting lawsuit.
In its lawsuit, Anthropic argues it was punished for speech protected by the First Amendment, and it is seeking to have the supply chain risk designation struck down.
It is rare, and possibly unprecedented until now, for an American company to be designated a supply chain risk, a label usually reserved for foreign entities with potential ties to U.S. adversaries. Anthropic’s designation drew surprise nationwide and stirred bipartisan concern that crossing a presidential administration could bring outsized punishment down on a business in any industry.
The designation has dealt a serious blow to Anthropic’s business, according to its court filings, which say the company has “received outreach from numerous external partners … expressing confusion about their obligations and concern about their ability to continue working with Anthropic,” and that “dozens of companies have reached out to Anthropic” for guidance or information about their rights to stop using its products. If the government broadly barred its contractors from working with Anthropic, the company argued, revenue ranging from hundreds of millions to several billion dollars could be at risk.
At Tuesday’s hearing, both sides fielded Judge Lin’s questions, which she had published in a filing the day before and which focused on issues such as whether Hegseth had the authority to issue certain directives and why Anthropic was designated a supply chain risk. In the same set of questions, the judge also asked under what circumstances a government contractor could have its contract terminated for using Anthropic’s technology in its work, asking, for example, “if a contractor for the Department uses Claude Code as a tool to build software for the Department’s national security systems, would that contractor then be terminated?”
On Tuesday, the judge also appeared to rebuke the Department of War over Hegseth’s post on X, which, according to Anthropic’s earlier court filings, sowed widespread confusion by declaring that “effective immediately, no contractor, vendor, or partner doing business with the United States military may engage in any commercial activity with Anthropic.”
“You’re here saying, ‘We said it, but we didn’t really mean it,’” Judge Lin remarked during the hearing, later pressing the question of why Hegseth wrote that statement barring contractors from doing business with Anthropic, rather than simply designating Anthropic a supply chain risk.
In a series of questions on Tuesday, Judge Lin asked whether the Department of War intended to terminate contracts with vendors over their work with Anthropic even when it was unrelated to their duties for the department. Counsel for the Department of War replied, “That is my understanding.”
Judge Lin then asked, “Take, for instance, a defense contractor. I don’t provide IT services to the military; I supply toilet paper. My contract won’t be terminated for using Anthropic, correct?” Counsel for the Department of War responded, “For work outside the DoW’s scope, that is my understanding.” But when the judge asked whether a military contractor providing IT services to the Department of War, though not for national defense networks, could be terminated for using Anthropic, the department’s counsel gave no definitive answer.
At the hearing, Judge Lin cited an amicus brief that, she noted, used the phrase “attempted corporate murder.” She commented, “I’m not sure it’s truly ‘murder,’ but it appears to be an attempt to incapacitate Anthropic.”
“This order continues to inflict irreparable harm on us,” Anthropic’s attorney said during the hearing, referring to Hegseth’s lengthy nine-part post.
In a recent filing, the Department of War argued that Anthropic could conceivably “seek to disable its system or preemptively degrade its model’s performance before or during active military operations” if it believed the military was exceeding its stated limits, a hypothetical the Pentagon said it considered an “unacceptable risk to national security.” The judge’s pre-hearing questions appear to challenge that claim, or at least press for more detail, asking, “What evidence in the record shows that Anthropic retained ongoing access to or control over Claude after delivering it to the government, such that Anthropic could carry out such acts of sabotage or defiance?”