“OpenAI was at catastrophic risk of falling apart” when Altman was fired, Murati says.
Live updates from Elon Musk and Sam Altman’s court battle over the future of OpenAI
OpenAI’s most dramatic weekend is back under the spotlight — this time in a courtroom, where former CTO Mira Murati is relitigating the November 2023 coup against CEO Sam Altman and her own role in both removing him and bringing him back.
Long before the board hit the red button on Altman, Murati says the core problem was basic honesty.
In video testimony played in the Musk v. Altman trial, Murati told the court that Altman lied to her about safety procedures for a new AI model. He allegedly assured her that OpenAI’s legal team had decided the model didn’t need to go through the company’s deployment safety board. Under oath, she flatly contradicted that: when asked if Altman was telling the truth, Murati answered, “No.”
According to her account, when she checked with Jason Kwon — then OpenAI’s general counsel — she found “misalignment” between what Altman said and what Kwon said, and she pushed to send the model through the safety review anyway. What Murati characterizes isn’t a philosophical rift about AI, but a management one: she testified that her criticism of Altman was “completely management related,” and that she had “an incredibly hard job” in a complex organization while asking Altman “to lead, and lead with clarity, and not undermine my ability to do my job.”
Murati also corroborated a broader pattern that other insiders had already alleged. Co‑founder Ilya Sutskever told the board Altman showed “a consistent pattern of lying, undermining his execs, and pitting his execs against one another,” and former board member Helen Toner similarly described evidence of Altman “lying and being manipulative in different situations.” Murati said she agreed with those descriptions — namely that Altman pitted executives against each other and undermined her.
The board would eventually echo that theme in public when it fired Altman, saying he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities” and that it no longer had confidence in his leadership.
Inside the company, all of this played out against a backdrop of deep technical collaboration — especially with Microsoft. In her deposition, Murati described how OpenAI and Microsoft had worked together on GPT models, and stressed that early on “there was no sense they would be commercializable.” That history matters: what began as pure research has since become one of the most lucrative AI commercialization efforts on the planet, massively raising the stakes of who controls OpenAI and how they behave.
The powder keg finally blew the week before Thanksgiving 2023 — “the AI industry’s biggest soap opera moment,” as one account put it. The OpenAI board abruptly announced Altman’s ouster via a terse, vague blog post on the company’s site, citing only that he was “not consistently candid.” The opacity sparked a wildfire of speculation and conspiracy theories across X and beyond, as the industry tried to reverse‑engineer what could have toppled the face of generative AI.
It is in this chaos that Murati’s role becomes both central and paradoxical. According to new reporting from her deposition, she was “instrumental in raising concerns that led to Altman’s termination,” and was briefly elevated to interim CEO as the board’s chosen internal successor. Throughout the turmoil, she appeared “everywhere at once”: interim chief, senior technical leader, and power broker in frantic back‑channel negotiations with Altman and Microsoft CEO Satya Nadella.
From the inside, Murati says the board’s move nearly blew the company apart. In one of the most striking lines from her testimony, she told the court that “OpenAI was at catastrophic risk of falling apart” when Altman was fired. Staff were in open revolt, Microsoft was hovering as a potential lifeboat (or acquirer in all but name), and the world was watching.
The board’s vague rationale didn’t just confuse the public; it detonated trust internally.
As word spread that Altman was out, other OpenAI executives and AI industry leaders quickly issued public statements of support for him. An online campaign erupted among employees: hundreds of staffers signaled loyalty by posting hearts and the phrase “OpenAI is nothing without its people,” a coordinated display aimed squarely at the board that had just fired their CEO.
On the corporate side, Microsoft’s role escalated from deep‑pocketed partner to de facto kingmaker. Nadella met personally with the OpenAI board in the days after the firing, while Murati was in direct contact with senior Microsoft executives including CTO Kevin Scott. She testified that she told Scott when Ilya Sutskever — the chief scientist who had sided with the board to remove Altman — signed a petition to reinstate him.
That was a crucial inflection point: if even Sutskever was defecting from the board’s position, its authority was collapsing.
Murati’s testimony makes one thing plain: whatever misgivings she had about Altman’s honesty, she rapidly turned against the board that used those concerns to remove him.
In court, she said she wanted Altman back. “The board had not followed a process that could be trusted and it wasn’t transparent with regard to firing Sam,” she testified. For Murati, the issue had flipped: the board’s own governance — opaque, unilateral, and destabilizing — became more dangerous than Altman’s alleged management sins.
Publicly, Murati threw her weight behind Altman’s reinstatement, joining the employee‑driven pressure campaign even as she briefly held the CEO title herself. But her stint as interim chief was over almost as soon as it began: the board moved to install Emmett Shear, the former Twitch CEO, as a compromise pick from outside the company.
That only hardened the backlash. Employees saw their preferred leader gone, their popular CTO demoted out of the top job, and a board that looked increasingly out of touch. Meanwhile, Altman and Nadella were in constant contact, exploring alternative futures — including the possibility of Altman decamping to Microsoft with a large chunk of OpenAI’s talent.
The endgame played out over mere days.
As staff signatures piled up on the pro‑Altman letter — with Sutskever’s about‑face one of the most symbolic — the board’s leverage vanished. Microsoft made clear it would back Altman. Murati, once the internal face of continuity after his firing, was now helping engineer his return through a blizzard of texts with Altman and Nadella.
Within days, the board capitulated. Altman was reinstated as CEO. Most of the directors who had voted to remove him were gone. OpenAI’s leadership structure was effectively rewritten to cement Altman’s position, while still paying lip service to the nonprofit charter and “benefit of humanity” language that had justified the board’s original hard line.
Murati’s role, revealed in more detail through her deposition, is a study in strategic whiplash. As one account summarized, “Mira Murati’s deposition pulled back the curtain on Sam Altman’s ouster,” showing her as both “instrumental in raising concerns that led to Altman’s termination” and a “key” player in his rapid reinstatement.
The Musk v. Altman case is formally about whether OpenAI betrayed its founding commitments and Elon Musk’s early‑backer expectations. But Murati’s testimony is turning the trial into something broader: a forensic examination of how the world’s most influential AI lab is actually run.
From one perspective — the board’s and Sutskever’s — Altman is the archetypal founder‑CEO who pushes too far, too fast, and plays too loose with the truth. Their language centers on a “pattern of lying,” manipulation, and a CEO who was “not consistently candid” with his own board.
From Murati’s perspective, the picture is messier. She clearly shares the concerns about Altman’s honesty and his habit of pitting executives against each other, enough to say under oath she “couldn’t trust [his] words.” But she also saw the board’s execution — secretive, process‑light, and wildly destabilizing — as a greater existential threat to OpenAI than Altman himself, driving her to help bring him back even after accusing him of lying.
And from the rank‑and‑file and Microsoft’s perspective, what mattered most was stability and momentum. Employees organized around Altman not because they loved governance nuance, but because they believed he was the person who could keep the company and its mission intact. Microsoft, with billions invested and its cloud business now fused to OpenAI’s models, needed the same thing.
The result is the power structure we see today: a stronger Altman, a chastened (and partially replaced) board, and a Microsoft partnership that looks less like a safety‑oriented counterweight and more like a strategic co‑pilot.
Murati’s video deposition — “we are seeing video testimony from Mira Murati’s deposition,” as one live‑blogger drily noted — doesn’t just rehash a boardroom drama. It crystallizes the core tension at the heart of frontier AI: the people building some of the most powerful systems in history are still arguing over something as basic as whether they can trust each other.