Late on Friday, OpenAI CEO Sam Altman announced that his company has signed a deal allowing the Department of Defense to use its AI models on the department's classified network.
The announcement follows a public dispute between the department — renamed the Department of War during the Trump administration — and OpenAI rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for "all lawful purposes," while Anthropic wanted to set clear limits around mass domestic surveillance and fully autonomous weapons.
In a lengthy statement Thursday, Anthropic CEO Dario Amodei said the company "never objected to specific military operations or sought to limit the use of our technology in an ad hoc way," but argued that "in a small number of cases, we believe AI could undermine, rather than strengthen, democracy."
More than 60 OpenAI employees and 300 Google employees signed an open letter this week urging their companies to back Anthropic's position.
After Anthropic and the Pentagon failed to reach a deal, President Donald Trump blasted the "Leftwing nut jobs at Anthropic" in a social media post that also directed federal agencies to stop using the company's products after a six-month wind-down period.
In a separate post, Defense Secretary Pete Hegseth claimed Anthropic wanted to "assert veto power over the operational decisions of the United States military." Hegseth also said he would designate Anthropic a supply-chain risk: "Effective immediately, no contractor, vendor, or partner working with the United States military may do any business with Anthropic."
As of Friday, Anthropic said it had "not yet been contacted directly by the Department of War or the White House about the status of our negotiations," but said it remains committed to "challenging any supply chain risk designation in court."
Notably, Altman said in a post on X that OpenAI's new defense deal includes safeguards addressing the same issues that proved to be sticking points for Anthropic.
"Among our core safety principles are limits on mass surveillance and human accountability for the use of force, including autonomous weapons systems," Altman wrote. "The Department of War agrees with these principles, embeds them in its laws and policies, and we have incorporated them into our agreement."
Altman said OpenAI "plans to build technical safeguards to ensure our models behave appropriately, which the Department of War also wants," and that it will embed engineers with the Pentagon "to help with our models and make sure they are secure."
"We are asking the Department of War to extend these same terms to all AI companies, which we believe everyone should be willing to accept," Altman added. "We have expressed our strong desire for de-escalation away from legal and government action and toward reasonable agreements."
Fortune's Sharon Goldman reports that Altman told OpenAI employees at an all-hands meeting that the government would allow the company to build its own "safety framework" to prevent misuse, and that "if the model refuses to do something, the government will not force OpenAI to do that thing."
Altman's post came shortly before reports that the U.S. and Israeli governments had launched airstrikes on Iran, with Trump calling for the overthrow of the Iranian government.
