Anthropic has no ability to interfere with its generative AI system Claude once the US military has put it into operation, a company executive said in a legal filing on Friday. The declaration came in response to Trump administration allegations that the company could tamper with its AI tools during armed conflict.
“Anthropic has never possessed the power to make Claude cease functioning, modify its capabilities, cut off access, or otherwise impact or endanger military actions,” said Thiyagu Ramasamy, Anthropic’s head of public sector. “Anthropic does not have the necessary access to disable the technology or alter the model’s behavior prior to or during ongoing operations.”
For months, the Pentagon has been locked in a dispute with the prominent AI research firm over how its technology can be used for national security, and what limits should apply to that use. This month, Defense Secretary Pete Hegseth classified Anthropic as a supply-chain risk, a designation that will prevent the Department of Defense from using the company’s software, even through contractors, going forward. Other federal agencies are also winding down their use of Claude.
Anthropic has filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency court order to overturn it. In the meantime, clients have already begun canceling contracts. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco; a judge could rule on a temporary reversal shortly afterward.
In a filing earlier this week, government lawyers wrote that the Department of Defense “is not obligated to tolerate the hazard that crucial military frameworks will be jeopardized at critical moments for national defense and active military engagements.”
WIRED reported that the Pentagon has been using Claude for data analysis, drafting memos, and helping develop battle plans. The government contends that Anthropic could disrupt live military operations by cutting off access to Claude or pushing harmful updates if the company objects to particular uses.
Ramasamy rejected that notion. “Anthropic does not maintain any clandestine entry point or remote ‘kill switch’,” he wrote. “Anthropic personnel cannot, for instance, log into a DoW system to modify or deactivate the models during an operation; the technology simply does not operate in that manner.”
He added that Anthropic could only deliver updates with the approval of the government and its cloud service provider, which he did not name but which in this case is Amazon Web Services. Ramasamy also said that Anthropic cannot access the prompts or other data that military users enter into Claude.
Anthropic executives assert in court filings that the company does not want veto power over tactical military decisions. Sarah Heck, Anthropic’s policy director, wrote in a Friday court filing that the company was prepared to guarantee as much in a contract proposed on March 4. “For the elimination of ambiguity, [Anthropic] comprehends that this license does not bestow or confer any prerogative to manage or reject legitimate Department of War operational decision‑making,” the proposal stated, according to the filing, which used an alternative name for the Pentagon.
The company was also willing to accept language addressing its concerns about Claude being used to help carry out lethal attacks without human oversight, Heck said. But negotiations ultimately broke down.
The Defense Department has now said in court filings that it “is undertaking further measures to lessen the supply chain vulnerability” posed by the company by “collaborating with external cloud service vendors to ensure Anthropic leadership cannot make solitary modifications” to the existing Claude systems.