On the stand, Elon Musk can't escape his own tweets
Elon Musk took the stand for a second day in his attempt to legally dismantle OpenAI.
Elon Musk’s testimony in his lawsuit against OpenAI, heard in a California federal court, is described by both AI- and Human-aligned coverage as a pivotal moment in a case centered on whether OpenAI departed from its original nonprofit mission. Both perspectives agree that Musk accuses OpenAI’s founders of effectively “stealing a charity” by evolving the lab into a for-profit structure with complex capped-profit arrangements and large outside investors such as Microsoft. Shared reporting notes that Musk was pressed about his past public statements, including tweets about Tesla pursuing artificial general intelligence and about “AI-enabled robot armies,” and that under oath he conceded Tesla is not currently developing AGI and said his “robot army” phrasing referred to product safety, not militarization. Both sides affirm that questioning probed his decision to stop funding OpenAI, his claimed $1 billion commitment, his understanding (or lack thereof) of term sheets and profit caps, and his views on open-sourcing models in relation to xAI’s Grok system and future versions that are not yet open source. They also concur that his demeanor shifted over the course of testimony, oscillating between subdued, terse yes/no answers and testier, more defensive exchanges on re-cross.
Coverage from both AI and Human sources situates the testimony within a broader struggle over the future of artificial intelligence, the role of nonprofit charters, and the tension between safety narratives and profit motives. Both frames highlight that Musk cast himself as an AI safety advocate wary of profit-driven AGI, while OpenAI’s legal team suggested his concerns intensified once he lost influence and potential control over the organization. Reporting on both sides notes that Musk acknowledged that all of his own AI-related ventures, xAI included, are for-profit even as he criticizes OpenAI’s for-profit pivot, and that he signed a widely publicized letter calling for a pause in advanced AI development before later launching xAI. The coverage also agrees that the key legal questions include how much weight to give informal expectations versus written terms in OpenAI’s founding, how to interpret capped versus uncapped investor returns, and whether Musk’s withdrawal of donations and subsequent competitive activities undermine his claim that OpenAI violated its original mission.
Motives and credibility. AI coverage tends to foreground Musk’s stated fear of unsafe, profit-maximizing AI and presents his testimony as at least partially consistent with a long-running concern about AGI risks, even when acknowledging business interests. Human coverage, by contrast, emphasizes his inconsistent statements, temper, and the judge and jury’s apparent frustration, casting his shifting answers as undercutting his credibility. While AI sources may treat his self-portrayal as a safety advocate at face value, Human sources more sharply suggest his motives include resentment over losing control and an unrealized plan to fold OpenAI into Tesla.
Legal strength of the case. AI-aligned reporting often describes the lawsuit in relatively neutral or structural terms, focusing on the evolution from nonprofit to capped-profit and the ambiguity around the founding documents, leaving open whether Musk has a strong contractual claim. Human coverage more explicitly underscores the thinness of his legal footing, highlighting his admission that he did not closely read the term sheet, cannot point to clear breached clauses, and appears confused about Microsoft’s investment structure. As a result, AI narratives may frame the dispute as a genuine mission-drift question, whereas Human narratives frequently portray it as a weak legal case built on vague expectations rather than enforceable promises.
Characterization of Musk’s conduct. AI sources generally describe Musk’s courtroom behavior more sparingly, noting shifts between subdued and combative without dwelling on them, and sometimes crediting him for eventually giving more direct yes/no answers. Human sources focus heavily on his demeanor, stressing refusals to answer, argumentative tangents, visible irritation on re-cross, and how these behaviors may alienate the judge and jury. Where AI coverage might treat his testimony style as a secondary detail, Human coverage uses it to argue that Musk is his own worst enemy in court and to question how seriously the fact-finders will take his narrative.
Framing of OpenAI and capitalism. AI coverage often centers OpenAI’s structural evolution and the broader industry problem of balancing openness, safety, and the capital needed for frontier models, sometimes echoing Musk’s argument that mission drift is a systemic risk. Human coverage more frequently juxtaposes Musk’s critique of OpenAI’s profit orientation with his own all-for-profit portfolio, portraying a tension between his “stolen charity” rhetoric and his embrace of commercial AI ventures like xAI. Consequently, AI narratives may frame the dispute as a cautionary tale about institutional design in AI, while Human narratives more pointedly frame it as a clash between Musk’s ideology and his business behavior.
In summary, AI coverage tends to treat Musk’s lawsuit and testimony as a structural conflict over AI governance, nonprofit missions, and safety versus profit, while Human coverage tends to focus on weaknesses in his legal claims, contradictions in his behavior, and the performative aspects of his day in court.