OpenAI CEO Sam Altman finally took the stand this morning to defend himself against his former cofounder Elon Musk’s lawsuit challenging OpenAI’s corporate structure.
Key Takeaways
- Sam Altman staunchly defended OpenAI’s hybrid structure, disputing Elon Musk’s “stolen charity” claim by pointing to the foundation’s substantial assets and the complex financial restructuring required to make them liquid.
- Altman described fundamental disagreements over AI control and safety, citing Musk’s suggestion that control of OpenAI pass to his children upon his death, as well as management tactics that “demotivated” key researchers.
- Despite Musk’s current lawsuit, Altman testified that he consistently kept Musk informed and involved in OpenAI’s developments and investment rounds, even recalling a “good vibes meeting” about a Microsoft investment.
Altman Confronts Musk: A Battle Over AI’s Soul and OpenAI’s Destiny
The courtroom was electric this morning as OpenAI CEO Sam Altman stepped into the spotlight, directly confronting the formidable accusations leveled by his former cofounder, Elon Musk. The high-stakes legal battle centers on Musk’s claim that OpenAI’s shift from a pure non-profit to a hybrid model with a for-profit subsidiary constitutes a betrayal of its founding principles, particularly its commitment to AI safety and open access. Altman’s testimony offered a rare glimpse into the pivotal early days of the AI giant, painting a vivid picture of philosophical clashes and management differences that ultimately led to a dramatic parting of ways.
The “Stolen Charity” Allegation: Altman’s Defense of OpenAI’s Evolution
Musk’s attorneys wasted no time in presenting their core accusation: that OpenAI’s other founders “stole a charity” when they launched a for-profit subsidiary to commercialize their advanced AI models. This bold claim suggests a deliberate diversion from the organization’s initial altruistic mission. However, Altman met the allegation with a moment of thoughtful silence, acknowledging the gravity of the framing before delivering a firm rebuttal.
“It feels difficult to even wrap my head around that framing,” Altman stated, emphasizing his perspective. “We created one of the largest charities in the world. This foundation is doing incredible work and will do much more.” His defense highlighted the substantial growth of the OpenAI foundation, which board chair Bret Taylor later affirmed now commands assets on the order of $200 billion. Musk’s legal team, however, sought to undermine this by pointing out that the foundation reportedly had no full-time employees until earlier this year. Taylor countered this by explaining the significant challenge of converting OpenAI’s equity into liquid cash, a complex process that was finally accomplished with the organization’s most recent restructuring in 2025. This detailed explanation aimed to portray the foundation’s operational evolution as a practical necessity rather than a deliberate neglect of its non-profit roots.
Clashing Philosophies: Control, Safety, and the Future of AI
Beyond the corporate structure, the lawsuit delves into fundamental disagreements over the very ethos of AI development and control. Musk’s lawyers pressed on whether OpenAI’s commitment to safety had been compromised as its commercial power expanded. Altman, however, turned the narrative, recalling a critical period in 2017 when the founders wrestled with how to secure the immense funding required to power their ambitious AI models.
It was during this time that Altman expressed deep reservations about Musk’s approach to safety. He recounted a “particularly hair-raising moment” during a debate, when Musk was asked directly about the succession plan if he were to die while controlling a hypothetical OpenAI for-profit. In Altman’s telling, Musk’s response—“maybe OpenAI should pass to my children”—was a profound concern. For Altman, whose vision for advanced AI was explicitly dedicated to preventing its concentration in the hands of a single individual, the answer signaled a core philosophical divergence. Drawing on his experience running Y Combinator, the prominent startup accelerator, Altman noted that “founders who had control usually did not give it up,” reinforcing his apprehension about Musk’s desire for singular command.
A Tale of Two Management Styles: Innovation vs. “Chainsaw” Tactics
Altman’s testimony further illuminated a stark contrast in leadership and management philosophies between himself and Musk. He asserted that Musk’s management tactics, while potentially effective in engineering and manufacturing environments, proved ill-suited for the delicate culture of an AI research lab. “I don’t think Mr. Musk understood how to run a good research lab,” Altman declared, painting a picture of a demotivating work environment under Musk’s influence.
He went on to describe an incident that he claimed caused “huge damage for a long time to the culture of the organization.” Altman recounted that Musk had, at one point, “required Greg and Ilya to make a list of the researchers and list out their accomplishments and stack rank them and take a chainsaw through a bunch.” This graphic description underscored the perceived harshness and lack of understanding for creative research talent. Altman cast himself as a protector of the “sweat equity” of cofounders Greg Brockman and Ilya Sutskever, who were effectively running OpenAI’s research operations while both Musk and Altman held other significant roles. This distinction emphasized the value of the researchers’ intellectual contributions and the detrimental impact of a management style that failed to recognize it.
The Irony of Continued Engagement: Updates, Advice, and Memes
Despite the unresolved clashes and Musk’s subsequent departure from OpenAI’s board—leading him to launch competing AI initiatives at Tesla and his own startup, xAI—Altman testified to maintaining a surprisingly consistent line of communication with the mercurial businessman. He continued to update Musk on OpenAI’s progress and even sought his funding and advice, underscoring a complex, ongoing relationship.
This consistent engagement forms a crucial part of OpenAI’s defense. Its lawyers highlighted that Musk had been kept fully informed and even invited to participate in the very investments and structural changes that his current lawsuit now claims were corrupt or unlawful. Altman even recalled a particular discussion about a Microsoft investment in OpenAI in 2018, describing it as “unlike a lot of meetings with Mr. Musk, this was a good vibes meeting.” He fondly recounted Musk spending a “long conversation showing us memes on his phone,” an anecdote that, while seemingly trivial, starkly contrasts with the bitter legal battle now unfolding. It suggests a level of personal awareness and informal involvement that could complicate Musk’s narrative of being a deceived or sidelined party.
Bottom Line
Sam Altman’s testimony painted a comprehensive picture of OpenAI’s foundational years, repositioning the narrative from one of betrayal to one of necessary evolution and stark philosophical differences with Elon Musk. His account suggests that the current lawsuit is less about a “stolen charity” and more about a fundamental disagreement over AI’s governance, safety, and the commercial realities of developing groundbreaking technology. The testimony offered not only a defense of OpenAI’s corporate structure but also a vivid characterization of the complex personal and professional dynamics that shaped one of the most influential organizations in modern technology, ultimately challenging the core premises of Musk’s legal challenge.