Staff at Amazon, Google, and Microsoft are urging their executives to back Anthropic in its escalating dispute with the Pentagon, pressing them to refuse any contracts that could enable autonomous weapons or mass domestic surveillance.
In a letter sent on Friday and seen by the FT, worker groups representing thousands of tech employees said they would resist any attempt to roll back the safeguards put in place by the AI start-up, after its chief executive Dario Amodei rejected what he described as a "final offer" to continue supplying services to the US military.
"We understand that [the Pentagon] will quickly seek to adopt alternative models without these safeguards, regardless of any efforts to force Anthropic's compliance," the letter says.
"We are writing to urge our companies to likewise refuse to comply should they, or the frontier labs they fund, enter into further agreements with the Pentagon," the letter said.
The intervention widens a stand-off that has prompted US defense secretary Pete Hegseth to threaten to cancel Anthropic's contracts and consider cutting it off from military procurement networks unless it backs down, raising the prospect of rivals stepping in.
Emil Michael, the Pentagon official overseeing defense research and development, said on Friday that a deal to defuse the tension was still possible.
"I am open to more conversations, and I told them that," Michael told Bloomberg TV, saying the Pentagon had already made an offer containing "many compromises on the language Anthropic wanted". He said Hegseth was due to make a decision later on Friday.
Sam Altman, OpenAI's chief executive, told staff on Thursday evening that he was working to broker a truce between rival Anthropic and the Pentagon, according to two people familiar with the matter, a development first reported by the Wall Street Journal.
The dispute is stirring a broader revolt across Silicon Valley, with rank-and-file engineers warning that using autonomous AI for lethal military action could cross a red line.
"Executives at Google, Microsoft, and Amazon should refuse the Pentagon's demands and give employees clarity on contracts with other coercive government agencies," the letter adds, referring to the Department of Homeland Security and Immigration and Customs Enforcement.
Signatories include the Communications Workers of America, which has 700,000 members, the Alphabet Workers Union, a group of DeepMind staff based in London, Amazon Employees for Climate Justice, and the broader advocacy group No Tech for Apartheid.
Microsoft declined to comment. Google and Amazon did not immediately respond.
Google and Amazon have both invested billions of dollars in Anthropic, and Microsoft signed a $30bn cloud services deal with the maker of the Claude chatbot in November. Microsoft also holds a 27 per cent stake in OpenAI.
Anthropic is currently the only AI company cleared for classified work, according to a government official, who added that Claude remained the best model for military applications.
Shutting Anthropic out of government contracts would hand a commercial advantage to rivals. OpenAI, Google, and Elon Musk's xAI, which each signed $200mn contracts with the defense department last year, are all in talks with the Pentagon to expand their work into classified projects, according to people familiar with the discussions.
xAI is close to an agreement that would let the military use its Grok model without restrictions. Google has yet to finalise its classified deal, nor has it taken a public position on domestic surveillance and autonomous weapons.
OpenAI had been negotiating a deal that would also include technical carve-outs on "domestic surveillance and autonomous offensive weapons", Altman told staff on Thursday.
AI researchers and executives have voiced alarm at the Pentagon's threats. They are troubled by the precedent that would be set by barring Anthropic from defense contracts over supply-chain vulnerabilities, or by commandeering its models.
"As an American, the last thing I want is for the government to use AI for mass surveillance of its citizens," said Boaz Barak, a computer scientist at OpenAI, in a post on X.
Mark Chen, OpenAI's chief research officer, told the FT that the company had not ruled out signing deals with the Pentagon, but that his teams had held productive discussions about the limits on AI deployment. He added that "diverse viewpoints" had been voiced on what should be permissible.
"We don't know what the other side means and . . . we need to have an internal debate about it," Chen said. "It is not a top-down mandate."
Jeff Dean, Google DeepMind's chief scientist, wrote on Wednesday: "Mass surveillance violates the Fourth Amendment and chills free speech." He also reaffirmed his support for a 2018 pledge to ban lethal autonomous weapons.
More than 100 Google employees sent a letter to Dean urging him to "use all possible influence to prevent any agreement that crosses these fundamental lines", according to two people involved who spoke to the FT.
More than 270 employees signed a public petition to take the same stance as Anthropic, including more than 60 who said they worked at OpenAI.
Staff at Google's AI lab have called on the company more broadly to back its peers in resisting the government's demands and to uphold safeguards on their own military contracts.
"I think it is unacceptable to hold a lab hostage to force it to provide lethal autonomous weapons and mass surveillance tools," one person wrote in a post liked by hundreds of staff on DeepMind's internal messaging channel, seen by the FT.
"As an industry, we should stand firmly with Anthropic," they added. "How do we advocate effectively to ensure our AI principles hold up against this kind of external pressure?"
In February last year, Google quietly removed a pledge not to use AI technology for weapons or surveillance from its ethical guidelines, which had originally been adopted after an employee revolt over its defense contracts in 2018.
More recently, Google has hardened its stance on dissent, firing 28 employees who protested against a $1.2bn cloud computing contract with the Israeli government and military.
"With a normal government, agreeing to all lawful uses wouldn't be a problem," said a person familiar with Anthropic's thinking. "This is not a normal government, and the technology is far more powerful."