Live updates from Elon Musk and Sam Altman’s court battle over the future of OpenAI
The Musk v. Altman trial turned sharply inward this week, as Neuralink executive and former OpenAI board member Shivon Zilis peeled back the curtain on the governance drama at the heart of the AI giant—raising questions about how one of the world’s most powerful technologies is actually being overseen.
According to live reporting from the courtroom, Zilis told the court that she had specific concerns about OpenAI CEO Sam Altman that she brought directly to the company’s nonprofit board. Those concerns, she testified, centered on two inflection points: the surprise launch of ChatGPT and a proposed, high‑stakes deal with nuclear energy startup Helion.
In broad strokes, her story is a familiar one in Silicon Valley: visionary founder races ahead; the board scrambles to keep up. What makes this different is that the product in question is frontier AI and the deal involves speculative nuclear fusion—both carrying stakes that extend far beyond quarterly earnings.
The first flashpoint came with the now‑legendary release of ChatGPT.
Zilis testified that the broad public release of ChatGPT “wasn’t discussed with the non-profit OpenAI board” beforehand and that this omission was formally addressed in a board meeting afterward. In a separate account of her testimony, she is quoted as saying she had “major concerns about OpenAI’s board not being notified in advance of ChatGPT’s release.”
She went further: Zilis said she and the “entire board had voiced extreme concern about that whole massive thing happening without any semblance of board communication.”
That choice of words—“massive thing,” without “any semblance of board communication”—underscores just how far, in her view, management outpaced governance. The ChatGPT launch wasn’t a minor product iteration; it was the moment OpenAI detonated generative AI into the mainstream.
From a corporate‑governance perspective, this is the core charge: the board responsible for safeguarding the nonprofit’s mission was allegedly left in the dark on the single most consequential product move in the organization’s history.
Zilis framed this as the first serious internal red flag about Altman’s leadership: the failure to notify the board of ChatGPT’s launch, she testified, was the first concern she raised internally about Altman.
In later testimony, she described these as broader “concerns about Altman that she raised with the board of OpenAI,” emphasizing that they were not abstract misgivings but specific episodes discussed at the board level.
Her account suggests a tipping point: what might have started as a disagreement over communication norms quickly hardened into a foundational question about trust and oversight—especially in a company that publicly brands itself as mission‑driven and safety‑oriented.
If ChatGPT was the governance shock, Helion was the financial and strategic curveball.
Zilis testified that another concern she had about Altman related to OpenAI’s potential deal with Helion, a nuclear energy startup pursuing fusion. Sam Altman and OpenAI co‑founder Greg Brockman were both personal investors in Helion, and that dual role set off alarms.
According to the courtroom reporting, Zilis noted that “since the company didn’t have an official product yet,” OpenAI considering a major deal with Helion “felt super out of left field … How is it the case that we want to place [a] major bet on a speculative technology?”
In her summary of events, she reiterated that the Helion deal raised eyebrows because Altman and Brockman both held investments and the technology was still speculative. That is precisely the sort of entanglement nonprofit‑style governance is designed to scrutinize: a mission‑oriented AI lab potentially channeling resources or strategic alignment toward a company in which its top executives hold personal stakes.
Zilis described the emotional weight of that moment bluntly, saying it was “probably the only time where I remember feeling in the pit of my stomach … just being like, I voiced my concerns.”
In other words: this wasn’t routine boardroom friction. It felt, to her, like a line‑crossing risk that demanded she speak up.
From the board perspective Zilis sketches, these weren’t nitpicks about paperwork. They were fault lines running straight through OpenAI’s identity.
First, there’s the process breach: a “massive” global deployment of a transformative model without, in her words, “any semblance of board communication.” For a nonprofit‑governed entity that publicly commits to careful rollout and alignment, that’s a serious charge.
Second, there’s the mission conflict: a proposed partnership with a highly speculative fusion company where the CEO and co‑founder have personal investments, raising both optics and potential fiduciary concerns.
Taken together, Zilis’ testimony paints a picture of a board struggling to assert its role while a charismatic, high‑velocity CEO pushed ahead on product and deal‑making.
Altman’s defenders—on and off the stand—are likely to see the same facts through a very different lens.
On ChatGPT, the argument practically writes itself: the product needed to be battle‑tested in the wild; the market and safety feedback loops demanded a fast release; bureaucratic drag could have ceded the future of AI to less cautious competitors. In this framing, the launch was bold but necessary, and the board’s role is to guide the mission, not micromanage shipping schedules.
On Helion, the pro‑Altman story centers on strategic synergy: advanced AI is energy‑hungry; if fusion pans out, it could underpin safe, scalable compute for AGI. Having visionary leaders with early bets in frontier energy could be framed not as a conflict, but as alignment of incentives—betting both reputational and financial capital on the same technological future.
That’s the clash at the center of this trial: is OpenAI a fast‑moving startup that occasionally outruns its paperwork, or a quasi‑public‑trust that must live and die by process, disclosure, and strict conflicts‑of‑interest hygiene?
Elon Musk, who helped found OpenAI before breaking with the company and later suing, has long argued that the lab drifted away from its original nonprofit, open‑science mission and into the orbit of Big Tech and private profit.
Zilis’ testimony about ChatGPT’s surprise debut and the Helion deal hands Musk’s side a narrative gift: even insiders, under oath, are now describing exactly the kind of governance breakdown and conflict‑laden decision‑making he has warned about.
The timing matters here. Zilis’ account doesn’t describe a sudden, late‑stage collapse; it traces a build‑up of unease from the moment OpenAI’s models went global. Musk’s lawyers can now point to a continuous thread: as OpenAI’s power and commercial stakes grew, so did internal doubts about how tightly Altman was being checked.
Whatever the legal outcome, Zilis has already supplied the trial with its most concrete, boardroom‑level critique of OpenAI’s leadership.
She has said, on the record, that the nonprofit board was not told in advance of ChatGPT’s public release, and that the proposed Helion deal troubled her because Altman and Brockman held personal investments in a still‑speculative technology.
Strip away the personalities and you’re left with a stark structural question that regulators, investors, and AI labs everywhere will have to answer: Who, exactly, gets to say “no” when the people building world‑shaping AI decide to move fast?
In the Musk v. Altman trial, that question is being litigated via emails, term sheets, and one board member’s uneasy recollection of a stomach‑level warning. Outside the courtroom, it’s about whether any board—or any governance framework at all—can keep up with the kind of power companies like OpenAI are racing to wield.