Helen Toner’s testimony in Elon Musk’s lawsuit against Sam Altman has turned a hazy Silicon Valley palace coup into a slow-motion autopsy — exposing a board that didn’t trust its CEO, a CEO who didn’t fully trust his board, and a governance structure that, at the very moment AI went mainstream, seemed barely in control of its own creation.

Early warning signs: a board in the dark

Well before the November 2023 firing of Sam Altman detonated in public, Toner says the board was already flying half-blind.

In her video deposition, she recalls discovering OpenAI’s breakout product the same way the rest of the world did: on social media. As the live coverage put it, “Toner says she found out about ChatGPT by seeing screenshots on Twitter.” That wasn’t a one-off oversight, she implied, but part of a pattern: she “wasn’t surprised she hadn’t been told, though, because ‘I was used to the board not being very informed about things.’”

That chronic information gap became, in her mind, an indictment of Altman’s approach to oversight. The lack of communication “caused me to believe that [Altman] was not motivated to help the board perform the oversight role.”

By the time her deposition was played to the jury, an hour of video in a trial already heavy on recorded testimony, the material risked putting jurors to sleep. The governance story, though, was anything but dull.

Inside the coup: Sutskever lights the fuse

The formal move against Altman, Toner testified, began not with a single scandal but with a colleague’s unease. According to her, the “starting point was Sutskever reaching out to have a conversation where he expressed serious concerns about Altman.”

It wasn’t one lie, one blown call, or one botched product launch. What led to the firing, she testified, was a “pattern of behavior” around “honesty and candor,” “not any one action.” Toner has described this before, including on a 2024 podcast; her account in court “is similar to Murati’s testimony,” the live coverage noted.

Those honesty concerns crystallized around several flashpoints:

  • The startup fund – “We are going through the removal of Sam Altman from OpenAI in detail,” one report summarized. “It was primarily because Altman was not entirely candid with the board about his interests in an OpenAI startup fund.”
  • The Toner paper drama – There was “some drama about Toner’s paper”: Altman, she said, told Sutskever that another board member had suggested Toner resign from the board. “That board member said she’d never said it.” In other words, Altman appeared to be playing board members off one another with statements at least one of them flatly disputes.
  • Accumulating complaints – “Further, Mira Murati and Sutskever also mentioned problems. And, of course, the lack of disclosure of ChatGPT...”

In Toner’s telling, this wasn’t a sudden moral panic about AI, nor a narrow policy dispute, but a basic breakdown of trust between a CEO and his board.

How the board pulled the trigger

When Toner “started talking about the board’s decision-making process,” the mechanics of Altman’s removal looked less like a carefully orchestrated governance exercise and more like a fire drill.

According to the liveblog’s account of her testimony, neither Altman nor Brockman “had been allowed to tell their side of the story, nor were their HR files pulled by the board.” In other words, the board ousted both the CEO and the president of a company at the center of the AI boom without:

  • formally reviewing their personnel records, or
  • allowing them to present a defense before the decision.

Equally striking: “There was no input from Microsoft, or any other investors or customers.” Microsoft, OpenAI’s biggest backer and infrastructure lifeline, learned about the board’s move roughly when the rest of the world did.

If that sounds like a legal minefield, at least one close observer agrees. “The main thing I am taking away from McCauley’s and Toner’s testimony is that the board got really bad advice from whatever lawyers they consulted on the firing Altman thing. I mean, I hope they consulted lawyers. I don’t think that’s come up in the testimony,” one account of the day’s proceedings noted.

That line captures the core paradox of Toner’s appearance: a board acting out of lofty fiduciary concern about honesty and safety, using a process so rushed and insular that it may have undermined its own case.

The safety backdrop: alchemy at the frontier

Hovering over all of this is the question Musk’s lawsuit is really about: is OpenAI still the mission-driven research lab he helped found, or a turbocharged commercial juggernaut driving into the unknown with flimsy brakes?

On that front, Toner’s testimony was hardly reassuring for anyone craving the comfort of hard science. Making AI models, she testified, is “more like alchemy than chemistry.” In practice, that means “there’s no clear-cut way to test for safety. People are just throwing things together to see what happens.”

She did allow that OpenAI’s internal safety processes were at least maturing, describing the methods of the company’s safety board as becoming “somewhat less slapdash” over time. But even that faint praise underscores how experimental — and improvisational — the whole field still is, at precisely the moment AI systems are being integrated into billions of users’ lives.

In Musk’s framing, this kind of testimony bolsters the argument that OpenAI’s mission and structure matter more than ever: if AI is closer to alchemy than engineering, you’d better trust the people mixing the potions.

The trial theater: video, boredom, and backstory

The jury, meanwhile, is encountering this drama at a distance, mostly via video. “We are now looking at Helen Toner’s deposition,” one real-time dispatch opened, noting that “this should be about an hour.” U.S. District Judge Yvonne Gonzalez Rogers, anticipating the numbing effect of deposition marathons, told the jury that if she sees them falling asleep, she is going to stop the video and have them stand and stretch.

Before Toner’s appearance, the trial had already trudged through the video deposition of Tasha McCauley, another former board member. “You may wonder: are we still listening to the video deposition of Tasha McCauley?” a separate update joked, before landing the more serious punchline: her testimony, combined with Toner’s, “suggests the OpenAI board received poor legal advice regarding the firing of Sam Altman.”

In the background of all this is the central civil war: Musk, who cast himself as a founding conscience of OpenAI, versus Altman, who turned it into the face of the AI gold rush. The Toner testimony doesn’t settle that dispute, but it gives both sides ammunition.

Competing narratives: who was reckless?

From one perspective — roughly aligned with Musk’s — Toner’s account is damning for Altman. A CEO who doesn’t tell his board about a flagship product launch, isn’t “entirely candid” about his financial interests in a related startup fund, and allegedly misrepresents board colleagues’ views looks like someone gaming a non-profit-style structure while chasing commercial dominance.

From another angle — closer to the view emerging from Altman’s allies — the board itself looks reckless: a small, ideologically charged group that tolerated information gaps for years, then moved suddenly to decapitate the company without basic procedural safeguards, input from key partners, or a clear legal game plan.

Toner’s own narrative sits uncomfortably between these poles. She portrays a board genuinely alarmed by “a pattern of behavior” around honesty and candor, while simultaneously revealing a process so insular that it verged on self-sabotage. The fact that she first saw ChatGPT on Twitter is almost too on the nose: a board nominally steering the future of AI, learning about its own breakthrough from the feed like everyone else.

What Toner’s testimony leaves behind

Chronologically, Toner’s deposition walks jurors through:

  1. The information vacuum – a board “not very informed about things,” including ChatGPT itself.
  2. The safety fog – AI development as “more like alchemy than chemistry,” with safety practices only “somewhat less slapdash” over time.
  3. The Sutskever alarm – concerns about Altman’s honesty and candor, joined by Mira Murati’s own complaints.
  4. The flashpoints – the startup fund, the misrepresented resignation comment, the secretive ChatGPT launch.
  5. The board’s snap decision – no HR review, no chance for Altman or Brockman to present their side, no consultation with Microsoft or other stakeholders.
  6. The legal hangover – outside observers concluding the board likely got “really bad advice” on how to execute its own coup.

In a trial that hinges on whether OpenAI betrayed its founding ideals or merely evolved past its founding benefactor, Toner doesn’t offer an easy hero. What her testimony does make clear is that at the precise moment AI crossed into global consciousness, the organization at its center was making world-shaping decisions with the informality of a start-up and the opacity of a secret society.

In that sense, both Musk and Altman have something to answer for — and Toner’s cool, slightly exasperated video presence, beamed into a courtroom where the judge periodically threatens to make everyone stretch, may be the closest thing this saga has to a conscience.