Elon Musk testifies that xAI trained Grok on OpenAI models
"Distillation" is a hot topic as frontier labs try to prevent smaller competitors from copying their models.
Elon Musk has testified in his lawsuit against OpenAI that his company xAI used OpenAI’s models to help train its Grok chatbot through a process he described as model distillation, in which outputs from a larger or more capable system are used to train a smaller one. Both AI and Human accounts agree that Musk characterized this as a widespread industry practice that relies on interacting with publicly accessible chatbots and APIs, and that he explicitly said “all AI companies” use such techniques to validate or improve their systems.
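The distillation process described here — training a smaller "student" model on a larger "teacher" model's outputs — is commonly implemented as a soft-label objective: the student is penalized for diverging from the teacher's temperature-softened output distribution. A minimal sketch of that loss follows; the function names and temperature value are illustrative and do not reflect any lab's actual training pipeline.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence from the teacher's soft targets to the student's
    # predictions, scaled by T^2 (a common convention so gradients keep
    # a comparable magnitude across temperatures).
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

# A student that matches the teacher exactly incurs zero loss;
# any mismatch yields a positive penalty (KL divergence is nonnegative).
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))             # → 0.0
print(distillation_loss([0.0, 0.0, 0.0], teacher) > 0)  # → True
```

In practice this loss would be minimized over many prompts sampled from the teacher, often blended with a standard cross-entropy term on ground-truth labels.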
Coverage also converges on the broader backdrop: Musk is suing OpenAI and leaders Sam Altman and Greg Brockman over what he claims was a shift from a nonprofit mission to a profit-driven structure, and his testimony about Grok’s training arose in that courtroom context. Reports note that Musk used the moment to frame the dispute as part of a larger struggle over AI governance and safety, including his warnings that advanced AI could potentially destroy humanity and his demand to restore OpenAI’s original nonprofit orientation.
Nature of the admission. AI-aligned coverage tends to frame Musk’s statement about using OpenAI models to train Grok as a technical clarification of standard industry practice, emphasizing the commonality of distillation and the gray zone around training on third-party outputs. Human coverage more often treats it as a pointed admission that undercuts Musk’s moral high ground in the lawsuit, highlighting the tension between his accusations against OpenAI and his own reliance on their technology.
Framing of industry norms. AI sources generally stress that distillation from accessible models is ubiquitous and almost assumed in modern AI development, downplaying any sense of impropriety and presenting Musk’s “all AI companies do this” line as a neutral description of practice. Human outlets more frequently question whether such norms are sustainable or fair, stressing that the practice may erode the value of heavy investments in proprietary models and infrastructure, and raising unresolved legal and ethical questions.
Motives and credibility. AI-oriented writeups tend to present Musk’s testimony in a relatively neutral or process-focused way, concentrating on what it reveals about technical strategies and competitive positioning in the AI race. Human reporting is more inclined to interrogate Musk’s motives, juxtaposing his claim to be a safety advocate with OpenAI’s argument that he is a disgruntled competitor, and stressing how his admission about Grok may weaken his narrative that OpenAI alone has deviated from an idealistic mission.
Broader stakes. AI coverage typically uses the episode to discuss model rankings, market competition, and technical approaches across Anthropic, OpenAI, Google, and others, situating Musk’s comments within an innovation and benchmarking frame. Human coverage more often connects the testimony to existential risk rhetoric, governance debates, and the possibility of structural reforms to nonprofit and for-profit models, framing Musk’s statements as part of a broader clash over who controls frontier AI and for whose benefit.
In summary, AI coverage tends to treat Musk’s testimony as a window into standard technical practices and competitive dynamics in the AI sector, while Human coverage tends to emphasize the legal, ethical, and reputational tensions exposed by his admission and its implications for OpenAI’s governance and the future of AI oversight.