OpenAI is in discussions with the US defense department to establish further safeguards aimed at preventing mass surveillance of American citizens by its AI, as the company moves to implement the agreement hastily unveiled on Friday.
The AI start-up founded by Sam Altman has already modified the language in its contracts concerning surveillance and aims to incorporate additional safeguards during the three-month period designated for the agreement’s execution.
Legal experts and staff have closely examined phrasing within the contract that prohibits “intentional”, “deliberate”, or “targeted” surveillance, according to individuals familiar with the conversations.
They have voiced concerns that the government could surveil Americans “incidentally” or “unintentionally” using modern AI tools, these people added.
“What remains to be determined is the operationalization of [these contracts],” commented a person close to OpenAI.
The subsequent phase will address questions extending “beyond the precise wording of the contracts,” including where the technology will be deployed and the technical safeguards governing when AI models might decline to follow certain instructions.
“The challenge for OpenAI lies in creating a product that remains functional yet avoids engaging in unsafe actions,” the person added.
The push to add protections during the implementation of the Pentagon deal comes as OpenAI has repeatedly sought to clarify the contract’s terms and allay concerns, including among its own staff, that the $730 billion start-up’s powerful AI could be misused.
OpenAI’s approach diverges from that of its rival Anthropic, which has refused to accept the contract’s provisions over surveillance concerns.
Altman has conceded that the rush to finalize a deal after Anthropic’s talks collapsed on Friday “appeared opportunistic and disorganized.”
Anthropic’s chief executive, Dario Amodei, criticized OpenAI’s “deceptive” communication regarding its initial contract in an internal memo to staff, a story first reported by The Information on Wednesday.
He accused Altman of “manipulating” his company by “endeavoring to undermine our stance while seemingly endorsing it,” as per the memo dispatched to staff on Friday.
Altman announced updates to the ChatGPT maker’s contract on Monday. These revisions “forbid deliberate tracking, surveillance, or monitoring of US persons or nationals, including through the acquisition or use of commercially obtained personal or identifiable data.”
Intelligence agencies, such as the National Security Agency—whose collection of extensive metadata from the phones of ordinary Americans was exposed by Edward Snowden in 2013—would also be excluded from this arrangement, he further stated.
Connie LaRossa, OpenAI’s US national security policy lead, stated on Wednesday that the terms for safeguards to protect against surveillance “are still under negotiation.”
OpenAI confirmed that its agreement with the Pentagon had been signed and that “we believe the new updates from Monday were significant. We will be collaborating closely with the department during this implementation stage.”
The Pentagon did not respond to a request for comment.
Defense Secretary Pete Hegseth has maintained that AI companies must make their technology accessible for “all legitimate purposes.”
In discussions with the Pentagon, Amodei advocated for assurances that its AI could not be utilized for domestic mass surveillance or in lethal autonomous weaponry.
The Anthropic CEO wrote in his memo that under the current Pentagon policy, established during Joe Biden’s administration, “a human must remain involved in the deployment of a weapon. However, that policy can be unilaterally altered by Pete Hegseth, which is precisely our concern.”
Amodei also insisted on a provision prohibiting agencies from gathering extensive public datasets and employing Anthropic’s tools to analyze them, according to a source privy to the discussions. He contended that while doing so might be legal, it could amount to widespread domestic monitoring.
OpenAI has contended it could uphold the same restrictions on surveillance and autonomous weapons through technical measures, such as its own model safeguards, and by ensuring that OpenAI employees remained “involved” and collaborated with officials.
Amodei dismissed those protective measures. “The methodologies [OpenAI] is adopting mostly prove ineffective: the primary reason [OpenAI] accepted them and we did not is that they prioritized appeasing employees, while we genuinely focused on preventing abuses,” he wrote.
Legal experts say existing surveillance law lacks clarity and certainty, placing AI labs in a difficult position.
“Due to the absence of a clear legal and policy framework, companies presumed that no policy, nothing, no framework existed,” remarked one former high-ranking defense official.
Civil liberties advocates have argued that the current frameworks are inadequate, as legislation trails behind technological advancements.
Mieke Eoyang, former deputy assistant secretary of defense for cyber policy and a visiting professor at Carnegie Mellon University, also noted questions regarding “whether or not, in this administration, they are acknowledging that level of already embedded protection within the system.”
Two former US government officials indicated that the White House had not publicly committed to existing legal frameworks designed to prevent the use of AI from infringing upon civil liberties.
One former senior defense official highlighted the fact that the administration had not specified whether it had maintained or revoked the AI National Security Memorandum policy, which established guidelines to prevent AI from violating civil liberties or human rights.
Paul Nakasone, a former NSA director and ex-head of US Cyber Command, who now serves on OpenAI’s board, stated at an event on Monday: “Our fundamental characteristic as a populace [is] to always view government monitoring as detrimental.”
“We must cultivate that trust regarding the National Security Agency, our intelligence community, to be capable of undertaking these kinds of missions with the assurance that our actions adhere strictly to the letter of the law,” he concluded.