Anthropic has said the Pentagon's designation of the AI lab as a supply chain risk will not affect the "overwhelming proportion" of its customers, as it pledged to challenge the ruling in court.
The defense department has formally notified Anthropic that it is now considered a threat to defense procurement networks, the company's chief executive Dario Amodei confirmed on Thursday.
The move marks an escalation of the stand-off between the Pentagon and one of America's leading AI labs over the terms governing the military's use of its technology. It comes even as its AI models continue to be used in operations, including during the US conflict with Iran.
The supply chain risk designation has raised concerns about Anthropic's commercial partnerships with companies such as Amazon that also work with the Pentagon, since it requires Anthropic's partners to cut ties with the company on military contracts.
A broad application of the order could have severely affected Anthropic's revenue, which has surged to $19bn on an annualised basis, as well as its access to critical data centre infrastructure.
However, Amodei said the defense department had narrowed the scope of the order "to the utilization of Claude by customers as an integral component of contracts with the Department of War, not every instance of Claude's use by customers who possess such contracts".
The designation is typically reserved for companies from US adversaries such as China and Russia.
Amodei said on Thursday that the company "[does] not consider this step legally justifiable and we have no option but to dispute it in legal proceedings".
Independent legal experts have also questioned whether the company's designation as a national security risk would survive judicial scrutiny.
After talks over the terms of Anthropic's work with the military collapsed last Friday, Secretary of Defense Pete Hegseth threatened sweeping measures against the $380bn start-up.
Hegseth said that "effective immediately, no agreement holder, vendor, or associate engaging in commerce with the United States military is permitted to conduct any business dealings with Anthropic".
Amodei had refused to back down on two "strict boundaries" prohibiting the use of his company's AI model Claude in lethal autonomous weapons and mass domestic surveillance.
On Thursday, a senior official at the department said that "from the outset, this has centered on a singular, fundamental principle: the military having the ability to employ technology for all legitimate objectives".
The official added: "The armed forces will not permit a supplier to interpose itself within the command structure by limiting the legitimate deployment of an essential function and jeopardize our combatants."
Amodei said on Thursday that there had been "fruitful discussions" between his company and the Department of Defense "during the past few days".
The situation was inflamed on Wednesday by the publication of a memo Amodei had sent to Anthropic staff.
In the 1,600-word memo, written the previous Friday, Amodei accused the Pentagon of "outright falsehoods" and said he had been singled out because Anthropic had not "offered sycophantic commendation to [President Donald] Trump", unlike OpenAI chief executive Sam Altman.
Following Anthropic's statement on Thursday, Emil Michael, under-secretary of defense for research and engineering, indicated that talks had stopped.
"There is no ongoing . . . dialogue" between the Pentagon and Anthropic, Michael wrote on X.
In his statement, Amodei apologised for the staff memo, saying: "It was a challenging period for the firm, and I regret the style of the message. It does not represent my thoughtful or deliberated opinions."