Sam Altman’s courtroom comeback tour has hit a serious snag: his own former chief technology officer, Mira Murati, now under oath, says she couldn’t trust his word and that OpenAI nearly imploded the moment he was fired.

From partnership optimism to internal distrust

Murati’s testimony, played by video in the Musk v. Altman trial, traces a dramatic arc from early technical collaboration to a crisis of confidence at the top of one of the world’s most powerful AI labs.

In her deposition, Murati described the early OpenAI–Microsoft relationship as focused on research rather than revenue. When GPT models were being developed, she said “there was no sense they would be commercializable,” underscoring that the initial collaboration was not framed as a race to monetize frontier AI at all costs.

That context matters, because it sets up the core allegation at the heart of her testimony: that as OpenAI grew into a commercial juggernaut, its CEO began to play fast and loose with internal processes meant to keep powerful models in check.

The safety board dispute: “No,” he wasn’t telling the truth

The most explosive detail in Murati’s testimony centers on a new AI model and whether it needed to go through OpenAI’s deployment safety board, the internal gatekeeper meant to vet risky releases.

Murati testified that Sam Altman told her OpenAI’s legal department had determined the model did not need to go through that board. Under questioning, she was asked plainly whether Altman was telling her the truth.

Her answer: “No.”

She described what happened next: she went to check with Jason Kwon, then OpenAI’s general counsel and now its chief strategy officer. What Kwon told her, she said, did not match what Altman had claimed. There was “misalignment” between their accounts, and she confirmed that “what Jason was saying and what Sam was saying were not the same thing.”

Faced with conflicting stories about a core safety process, Murati said she decided to err on the side of caution and ensure the model did go through the safety board anyway.

The implication is stark: in her telling, the CEO tried to route around a safety mechanism, misrepresented legal advice to his own CTO, and was thwarted only because she double‑checked the claim and enforced the process herself.

A pattern, or a misunderstanding?

Murati didn’t stop at that single incident. She cast her criticism of Altman as “completely management related,” saying she “had an incredibly hard job to do in an organization that was very complex” and that she was “asking Sam to lead, and lead with clarity, and not undermine my ability to do my job.”

She agreed with prior characterizations of Altman as someone who pitted executives against one another and undermined his own leadership team. That echoes a 52‑page memo from co‑founder Ilya Sutskever, read into the record in another deposition, which alleged that Altman “exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another.”

Former board member Helen Toner had previously said on a 2024 podcast that OpenAI executives brought the board evidence of Altman “lying and being manipulative in different situations.” Murati aligned herself with that camp, reinforcing the narrative that the friction wasn’t about technical direction or mission drift, but about trust and governance.

From Altman’s side, the line has long been that the company’s internal turmoil stemmed from disagreements over strategy and speed, not dishonesty. But Murati’s sworn statement shifts that debate: either a web of misunderstandings somehow repeatedly painted Altman as evasive and manipulative to his closest colleagues and board — or there was, as his critics allege, a real pattern.

November 2023: Firing, free fall, and Microsoft steps in

The tension boiled over in November 2023, when OpenAI’s board abruptly fired Altman, accusing him of not being “consistently candid” and saying it had lost confidence in his ability to lead.

Murati’s testimony adds color from inside the blast radius. In a separate portion of the deposition, she recalled the days immediately after Altman’s ouster and delivered another headline line: “OpenAI was at catastrophic risk of falling apart” when he was fired.

That wasn’t hyperbole, in her view. Staff were in open revolt; key researchers and executives signaled they were prepared to walk. Murati told the court that she reached out to Microsoft’s Kevin Scott to inform him that Ilya Sutskever had signed a petition to reinstate Altman. The message to OpenAI’s biggest backer was clear: the people building the core technology wanted their CEO back.

At the same time, Microsoft CEO Satya Nadella met with the OpenAI board, a move widely understood as pressure from the company that had poured billions into OpenAI’s capped‑profit structure to stabilize its investment and workforce.

Murati herself sided with the reinstatement effort. As The Verge reported from the deposition, she said bluntly: “The board had not followed a process that could be trusted and it wasn’t transparent with regard to firing Sam.”

Her position was paradoxical but precise: she questioned Altman’s internal candor and leadership style, yet believed the board’s opaque move to dump him was even more damaging and destabilizing than keeping him in place.

Inside the deposition room: a CTO in the middle

Murati’s video testimony, portions of which focused on the Microsoft partnership, underscored how she experienced the period as someone caught between a mission‑driven nonprofit charter, a hyper‑commercial reality, and two power centers: Altman and the board.

In describing the early OpenAI–Microsoft work, she emphasized again there was “no sense [the GPT models] would be commercializable” at first. Over time, of course, those models became the foundation of one of the fastest‑growing software products in history. That shift from research lab to platform provider heightened the stakes of every internal governance dispute — especially over safety.

Murati testified that Altman “undermined her in her ability to do her job, and pitted executives at OpenAI against each other.” Yet when the board finally moved against him, she concluded that the manner of his ouster, sudden, secretive, and lacking any transparent process, put the entire enterprise at risk.

Her account suggests an institution where almost no one, at the most critical moment, trusted anyone else: not the board, not the CEO, and not the by‑the‑book safety processes that were supposed to structure the rollout of world‑shaping AI.

Musk v. Altman: what’s really on trial

Formally, Musk v. Altman is a dispute about whether OpenAI betrayed its founding commitments to openness and safety as it pivoted toward a proprietary, profit‑capped model with Microsoft as its anchor tenant. Murati’s testimony doesn’t resolve that question, but it sharpens the stakes.

On one side, her claims feed directly into Elon Musk’s narrative: that OpenAI’s leadership cut corners on safety and secrecy while chasing dominance in the AI race. Her account of a CEO allegedly misrepresenting whether a model needed safety‑board review, and of executives privately documenting patterns of lying and manipulation, is tailor‑made for that argument.

On the other side, her insistence that the board “had not followed a process that could be trusted” and “wasn’t transparent,” and that the company was at “catastrophic risk of falling apart” without Altman, bolsters the defense claim that whatever his flaws, Altman was the glue holding a fragile, high‑velocity organization together.

If Musk wants to show that OpenAI drifted from its founding ideals, Murati gives him a story about eroded internal trust and contested safety protocols. If Altman wants to show that both employees and key partners ultimately chose him over a self‑destructing board, she gives him that too.

The bigger question: who governs frontier AI?

Stepping back, Murati’s account reads like a cautionary tale about the governance of frontier AI labs. A CTO believes she has to double‑check the CEO on a core safety procedure. A board that’s supposed to oversee existential‑risk research fires the CEO in a way a key insider describes as untrustworthy and opaque. A strategic partner with enormous leverage flies in to keep the company from disintegrating.

For now, the trial is about past promises and present power. But the testimony from one of OpenAI’s top former executives points to a broader, uncomfortable truth: the institutions currently steering AI’s future are still figuring out how to tell the truth to each other — and how to hold each other to account — even as they ship systems that the rest of the world is supposed to trust.