Sam Altman may be back in charge at OpenAI, but in a San Francisco courtroom this week, the story of how he briefly lost that job is being replayed in excruciating detail — and it doesn’t make anyone at the top of the AI boom look particularly in control.

The stage: Musk v. Altman, and a pivotal deposition

The latest act in Elon Musk’s lawsuit over the future and direction of OpenAI unfolded through a video screen: the recorded deposition of former OpenAI board member Helen Toner. The court settled in for about an hour of testimony as the judge warned jurors she’d pause the tape and make them stand and stretch if they started to nod off — a wry acknowledgment that corporate governance drama can be as sleepy as it is consequential.

Toner’s appearance was billed as a key moment in the trial, which pits Musk’s narrative — that OpenAI betrayed its founding ideals and him personally — against Altman’s defense that the company evolved legitimately and lawfully. Her account goes to the heart of a central question: did the board have real, substantive reasons to fire Sam Altman in November 2023, or did it panic and overreach?

Early tensions: a board in the dark

Toner’s story doesn’t start with a single smoking gun but with an accumulated sense that the OpenAI board was flying blind. She recounted that she first learned about ChatGPT — the product that turned OpenAI from a research lab into a household name — not in a board briefing, but by stumbling across screenshots on Twitter.

She added that she “wasn’t surprised she hadn’t been told,” explaining that she was already “used to the board not being very informed about things.” That chronic information gap had a deeper implication: it “caused [her] to believe that [Altman] was not motivated to help the board perform the oversight role.”

In a case that turns heavily on whether OpenAI’s leaders lived up to their fiduciary and ethical obligations, that’s a brutal assessment. Boards are supposed to see around corners; Toner is essentially saying they couldn’t even see the front door.

The Sutskever warning: “serious concerns” about Altman

The tipping point, Toner testified, came from inside the house. She described how chief scientist and co-founder Ilya Sutskever approached her to talk about Altman, setting off the chain of events that would culminate in the CEO’s ouster.

According to Toner, Sutskever “reached out to have a conversation where he expressed serious concerns about Altman.” Those concerns, she stressed, were not about a one-off misstep but a “pattern of behavior” involving problems with “honesty and candor.”

In her telling, this was not a board coup hunting for a pretext. It was the board’s top researcher flagging what he saw as systemic issues with the CEO’s truthfulness — and doing so in terms that made continued trust difficult to sustain. Toner’s account, as tech press observers have noted, is consistent with what she had already laid out publicly in a 2024 podcast and “similar to [CTO Mira] Murati’s testimony,” suggesting a shared internal narrative rather than a convenient post-hoc rewrite.

Pulling the thread: money, messaging, and mixed signals

Once Sutskever raised the alarm, the board began to dig. In court, Toner’s deposition walked through “the removal of Sam Altman from OpenAI in detail,” laying out multiple strands that, in the board’s eyes, braided into a single integrity problem.

The first strand was financial transparency. Toner said the firing was “primarily because Altman was not entirely candid with the board about his interests in an OpenAI startup fund.” For a company that had famously wrapped itself in a nonprofit charter and lofty talk of AI for the benefit of humanity, undisclosed personal stakes in a related fund cut against the brand — and, more importantly, against basic governance norms.

The second strand was what you might call “narrative manipulation.” Toner described “some drama about [her] paper,” explaining that Altman had told Sutskever that another board member had suggested Toner resign over it, a claim the board member in question flatly denied, saying she’d “never said it.” In Toner’s framing, this wasn’t just interpersonal gossip; it was evidence that Altman was triangulating board members against one another and misrepresenting their conversations.

Add to that the mounting concerns voiced by other senior leaders: “Mira Murati and Sutskever also mentioned problems,” she said, reinforcing that this was now a chorus, not a solo complaint.

Finally, there was the now-infamous issue of disclosure around ChatGPT itself — the product that Toner had to discover from social media. The “lack of disclosure of ChatGPT,” she reiterated, fit the same pattern of sidelining the board from major product decisions.

In Toner’s chronology, these incidents didn’t sit in isolation. Each reinforced the others, building a portrait of a CEO who treated the board less as a governing body and more as a public relations obstacle.

The board’s breaking point: from doubts to dismissal

By November 2023, these concerns came to a head. Toner’s deposition, as relayed in court coverage, underscores that the board’s decision to fire Altman was not anchored to a single catastrophic event but to an accumulation of trust breaches: a “pattern of behavior” and issues with “honesty and candor” that they no longer felt they could manage within the status quo.

In that light, the move to oust Altman looks, from the board’s side of the table, less like a coup and more like a defensive maneuver — an attempt by a relatively weak board to reassert its authority over a wildly powerful, wildly visible CEO who no longer felt obliged to keep them in the loop.

The Musk narrative: from founding ideals to courtroom clash

Overlaying all of this is Elon Musk’s own argument in the trial: that OpenAI, which he helped found as a nonprofit dedicated to open research, has morphed into a closed, profit-driven juggernaut aligned with Microsoft and beholden to Altman’s ambitions. Toner’s testimony, while not directly about Musk, inadvertently feeds both sides of this fight.

On the one hand, her claims that she learned about ChatGPT via Twitter and that the board was “not very informed about things” bolster Musk’s critique that OpenAI’s internal checks and balances were a façade — that real power was concentrated in a small inner circle around Altman, with the board relegated to bystander status.

On the other hand, the fact that the board ultimately moved to fire Altman — and that it did so citing concerns about honesty, oversight, and conflicts of interest — serves as evidence that OpenAI’s governance wasn’t entirely hollow. When pushed to the brink, the board did act, however messily.

Altman’s camp and the missing counterpunch

In this particular slice of the trial record, Altman’s direct voice is largely absent; what we get is Toner’s reconstruction and the live-blog framing around it. The Altman side of the story, as presented publicly in the aftermath of his brief ouster and rapid reinstatement, emphasizes that employees, partners, and investors rallied to him — and that the board that fired him was ultimately reshaped.

That counter-narrative implicitly argues that Toner and her allies misread normal founder aggression and strategic opacity as fatal character flaws — or, more bluntly, that they were out of their depth in overseeing a hyper-growth AI company. From this vantage point, the fact that Toner discovered ChatGPT via Twitter sounds less like a governance scandal and more like a board that was simply too slow and distant from the product to keep up.

But in the courtroom, what matters is not who won the internal power struggle in 2023 — we already know Altman returned — but whether, in the years leading up to that crisis, OpenAI’s leaders met the obligations they had to founders, funders, and the public mission they loudly claimed.

What Toner’s testimony really exposes

Told in order, Toner’s account sketches a clear arc:

  1. Early operations: A board increasingly “used to” being poorly informed about major product launches and strategic moves.
  2. Escalating concerns: Sutskever and Murati raise “serious” and repeated issues about Altman’s honesty and behavior.
  3. Discovery of conflicts: The board uncovers that Altman “was not entirely candid” about his stake in an OpenAI startup fund.
  4. Internal misrepresentations: Altman is accused of mischaracterizing a board member’s position on Toner’s own paper, deepening mistrust.
  5. November 2023 ouster: The board, citing this “pattern of behavior,” moves to remove Altman as CEO.
  6. 2026 courtroom replay: In Musk v. Altman, Toner retells this trajectory under oath, her deposition now a central exhibit in a broader fight over what OpenAI has become.

The punchline is less about who was right in any single dispute and more about the system itself. If a board member can discover the company’s defining product on social media, and if the chief scientist feels he has to personally raise red flags about the CEO’s honesty, then OpenAI’s governance was already in trouble long before the world learned the name “ChatGPT.”

Whatever the jury decides in Musk v. Altman, Helen Toner’s testimony has already delivered one verdict: the AI revolution’s most powerful lab was, at crucial moments, run with less transparency than the average startup.