Caitlin Kalinowski, the hardware executive who led OpenAI's robotics team, announced her resignation today, citing the company's controversial agreement with the Department of Defense.
“This was a difficult decision,” Kalinowski wrote in a social media post. She added, “AI holds a crucial role in national security. Yet, the monitoring of Americans without legal oversight and autonomous lethal capabilities devoid of human authorization are boundaries that warranted more thoughtful consideration than they received.”
Kalinowski, who previously led the team building augmented reality glasses at Meta, joined OpenAI in November 2024. In her statement today, she stressed that her decision was “rooted in principle, not individuals,” and expressed her “profound respect” for CEO Sam Altman and the OpenAI team.
In a subsequent post on X, Kalinowski further clarified, “My primary concern is that the announcement was hurried without the necessary protective measures being delineated. This represents, first and foremost, a governance issue. These subjects are too critical for agreements or declarations to be expedited.”
An OpenAI spokesperson confirmed Kalinowski's departure to TechCrunch.
“We maintain that our accord with the Pentagon forges a practical pathway for the ethical utilization of AI in national security, while distinctly outlining our strict limitations: a ban on domestic surveillance and autonomous weaponry,” the firm declared in a statement. “We understand that individuals hold deeply felt perspectives on these subjects, and we commit to ongoing discussions with staff, governmental bodies, civil society, and global communities.”
OpenAI's deal with the Pentagon was announced just over a week ago, after talks between the Pentagon and Anthropic collapsed when the AI firm pushed for safeguards preventing its technology from being used for widespread domestic surveillance or fully autonomous weapons. The Pentagon subsequently designated Anthropic a supply-chain risk. (Anthropic said it would challenge the designation in court; meanwhile, Microsoft, Google, and Amazon confirmed they would keep Anthropic's Claude available to non-defense customers.)
OpenAI then quickly announced its own agreement, which authorizes the use of its technology in secure, classified environments. As its executives elaborated on the arrangement across social media, the company described it as taking “a more comprehensive, multi-tiered strategy” that relies not just on contractual language but also on technical safeguards to enforce limitations similar to those Anthropic sought.
Still, the controversy appears to have damaged OpenAI's standing with some users: ChatGPT uninstalls have surged 295%, and Claude has climbed to the top of the App Store charts. As of Saturday afternoon, Claude and ChatGPT hold the top two spots, respectively, among free apps on the U.S. App Store.

