Elon Musk took the witness stand Tuesday in the high-stakes federal trial that could reshape artificial intelligence governance, delivering explosive testimony that OpenAI's leaders "betrayed" the company's founding mission by transforming it from a nonprofit organization into a for-profit powerhouse valued at over $730 billion.
Speaking at the federal courthouse in Oakland, California, Musk told the jury it was simply "not OK to loot a charity," describing himself as an innovator seeking to help humanity prosper while accusing OpenAI co-founders Sam Altman and Greg Brockman of systematic deception. The Tesla and SpaceX CEO's testimony marked the dramatic opening of a trial that legal experts describe as the most significant AI governance case in history.
The Heart of the Legal Battle
The lawsuit centers on Musk's allegations that he was deceived into investing in OpenAI under false pretenses, believing he was supporting a nonprofit organization dedicated to ensuring AI benefits humanity. Instead, according to court filings, Altman and Brockman allegedly diverted the company from its public benefit mission toward profit-driven operations that now serve more than 800 million weekly ChatGPT users.
In grandiose terms characteristic of his public persona, Musk recounted OpenAI's founding in 2015 and his growing concerns about the company's trajectory. "We established OpenAI as a nonprofit to ensure artificial intelligence would benefit all of humanity," Musk testified. "What happened instead was a betrayal of those founding principles for commercial gain."
"It's not correct to steal from a charity institution. This could kill us all. We don't want a Terminator."
— Elon Musk, Tesla CEO
The dramatic testimony included Musk's characteristic warnings about AI's existential risks, telling the court that unchecked AI development "could kill us all" and referencing the Terminator film franchise to illustrate his concerns about artificial general intelligence.
OpenAI's Unprecedented Transformation
Central to the case is OpenAI's evolution from its 2015 founding as a nonprofit organization to becoming one of the world's most valuable companies. The transformation has been nothing short of extraordinary: OpenAI recently closed a $110 billion funding round, the largest private technology financing in history, from a consortium including Amazon, SoftBank, and Nvidia, reaching a $730 billion valuation.
This commercial success came through ChatGPT's explosive growth, now serving over 800 million weekly users with 10% monthly growth. The company has also expanded into military partnerships, integrating ChatGPT into classified Pentagon networks—a development that has created additional controversy given the company's original humanitarian mission.
Austrian and Swedish media investigations have added fuel to Musk's allegations, with over 100 former OpenAI colleagues describing Altman as "unreliable and fickle," with a pattern of abandoning stated principles for commercial success. Sources quoted in these reports liken Altman's conduct to fraud, noting that he has "been fired, suffered resignations, and been sued" throughout his career.
The Broader AI Governance Crisis
The trial occurs during what experts describe as a "critical inflection point" for AI governance. Recent safety failures have strengthened Musk's arguments about mission drift, particularly the Tumbler Ridge massacre case. ChatGPT's automated systems flagged concerning content from shooter Jesse Van Rootselaar eight months before his February 2026 attack, which killed eight people, but OpenAI determined the content did not meet its threshold for notifying law enforcement.
Sam Altman issued a formal apology to the Tumbler Ridge community on April 24, stating: "I am deeply sorry to the families and community of Tumbler Ridge for our failure to act on concerning content we detected... We should have alerted authorities about the account activity of the shooter, and we failed in our responsibility to public safety."
This safety controversy has become central to Musk's case that OpenAI has prioritized growth over its founding safety mission. The incident has prompted calls for "red flag" laws requiring AI companies to report threats of violence, similar to reporting mandates in the healthcare and education sectors.
Industry Divide Over Military Applications
A stark industry divide has emerged over military AI applications. OpenAI has embraced Pentagon partnerships, integrating ChatGPT into classified Defense Department networks; Anthropic, by contrast, has refused to remove safety restrictions from its Claude AI system, despite facing a "supply chain risk" designation and losing over $200 million in federal contracts.
Former OpenAI hardware team leader Caitlin Kalinowski resigned in March 2026 over Pentagon partnership concerns, citing objections to "surveillance of Americans without judicial oversight and lethal autonomy without human authorization." Her departure highlighted growing internal tensions over the company's military collaborations.
Legal and Financial Stakes
The trial has moved quickly for such a complex legal proceeding: jury selection concluded Monday, and opening statements began Tuesday with Musk taking the stand as one of the first major witnesses. Even so, the case could extend into 2027 given the complexity of the issues and the resources of both parties.
For Musk, the lawsuit represents more than financial recovery; it is about fundamental principles of AI development. Though a net worth of more than $800 billion makes him the world's wealthiest individual, the case reflects his broader concerns about AI governance and corporate accountability in an industry that increasingly controls essential digital infrastructure.
OpenAI's defense is expected to argue that the company's transformation was necessary to compete with well-funded rivals and that its current structure better serves its mission of developing beneficial AI. The company's lawyers will likely highlight OpenAI's continued commitment to AI safety and its $15,000 employee support program for migrant workers, demonstrating social responsibility amid regulatory pressures.
Global Regulatory Context
The trial unfolds during an unprecedented wave of international AI regulation. Spain has implemented the world's first criminal executive liability framework for tech platforms, creating imprisonment risks for executives whose companies violate safety regulations. France has conducted cybercrime raids on AI companies, while the UN has established an Independent Scientific Panel with 40 experts for global AI assessment.
This regulatory intensification represents the most sophisticated international technology governance effort since internet commercialization, with European authorities coordinating enforcement to prevent jurisdictional shopping by multinational tech companies.
Infrastructure Constraints and Market Disruption
The AI industry faces significant infrastructure challenges that add complexity to the legal battle. A global semiconductor shortage has driven memory chip prices up sixfold, a constraint expected to affect Samsung, SK Hynix, and Micron through 2027. Despite these pressures, Alphabet has committed $185 billion to AI infrastructure for 2026, while Amazon has announced over $1 trillion in AI development plans.
The "SaaSpocalypse"—AI's disruption of traditional software—has eliminated hundreds of billions in market capitalization, fundamentally changing the technology sector landscape. This market transformation underscores the stakes of AI governance decisions being made in courtrooms and regulatory agencies worldwide.
Successful Alternative Models
Amid the controversy over OpenAI's commercial pivot, successful alternative models have emerged that maintain human-centered approaches. Canadian universities have implemented AI teaching assistants that enhance rather than replace critical thinking skills. Malaysia has launched the world's first AI-integrated Islamic school, combining technology with traditional learning approaches. Singapore's WonderBot 2.0 demonstrates successful heritage education applications.
These examples suggest that AI development can serve human flourishing without abandoning foundational principles—a possibility that Musk's lawsuit seeks to protect through legal precedent.
The Path Forward
As testimony continues, the trial will examine fundamental questions about corporate responsibility, mission integrity, and the balance between innovation and ethical constraints. The outcome will influence AI governance standards, executive accountability, and democratic oversight of powerful technology companies for decades to come.
The case represents a critical test of whether democratic institutions can evaluate mission-driven organizational transformations and balance innovation requirements with ethical obligations. Success or failure will affect whether courts can maintain civilian oversight of AI development while preserving the values and security that democratic societies require.
For the AI industry, the trial serves as a watershed moment determining whether rapid technological advancement can proceed within frameworks that prioritize human welfare alongside commercial success. As Musk concluded his testimony Tuesday, the stakes could not be clearer: the decisions made in this Oakland courthouse will establish precedents for human-AI relationships that will shape the remainder of the 21st century.
The trial is expected to continue for several weeks, with additional testimony from key OpenAI executives, former employees, and AI safety experts. Whatever the outcome, the case has already succeeded in placing AI governance at the center of public debate, forcing a reckoning with questions about corporate responsibility, technological development, and democratic oversight that will resonate far beyond the federal courthouse where it all began.