Anthropic and Elon Musk spent months lobbing rhetorical grenades at each other. This week, they quietly signed the kind of deal that only happens when the AI arms race meets a hard wall: the world is running out of GPUs.
By early 2026, Anthropic’s biggest problem wasn’t demand — it was physics.
The Claude maker had seen “80x growth per year in revenue and usage” in Q1 2026, after planning for just 10x, CEO Dario Amodei told developers in late April, a surge that produced “difficulties with compute” and aggressive usage limits that infuriated power users. Anthropic’s own post was blunt: it needed more hardware to keep its fastest-growing customers happy.
On May 6, the company spelled out the stakes. “We’ve agreed to a partnership with SpaceX that will substantially increase our compute capacity,” Anthropic wrote, adding that, effective immediately, it was “doubling Claude Code’s five-hour rate limits” for Pro, Max, Team and Enterprise plans, removing peak-hour throttling, and “raising our API rate limits considerably for Claude Opus models.”
Behind those user-facing perks was a single line that explains everything about the new AI economy: “We’ve signed an agreement with SpaceX to use all of the compute capacity at their Colossus 1 data center. This gives us access to more than 300 megawatts of new capacity (over 220,000 NVIDIA GPUs) within the month.”
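Those two figures imply a striking all-in power budget per accelerator. A quick back-of-envelope check (the per-GPU division is our own illustration, not from either announcement, and the 300 MW figure presumably covers cooling and networking, not just chips):

```python
# Back-of-envelope: implied power budget per GPU at Colossus 1.
# Both inputs are from the announcement; the per-GPU result is
# illustrative only, since "more than 300 MW" is facility-level power.
capacity_mw = 300          # "more than 300 megawatts of new capacity"
gpus = 220_000             # "over 220,000 NVIDIA GPUs"

watts_per_gpu = capacity_mw * 1_000_000 / gpus
print(f"~{watts_per_gpu:,.0f} W per GPU, all-in")  # ~1,364 W per GPU
```

That roughly 1.4 kW per GPU is in the plausible range for modern accelerators once you fold in datacenter overhead, which is a useful sanity check that the two announced numbers are consistent with each other.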
In other words, one lab’s shortage had met another empire’s surplus.
On the other side of the table sat Elon Musk’s AI stack. SpaceX had quietly become a hyperscaler in everything but name: its Memphis-based Colossus 1 supercluster, built “in record time,” was already “one of the world’s largest and fastest-deployed AI supercomputers,” featuring “over 220,000 NVIDIA GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators.”
xAI’s own announcement framed Colossus 1 as a frontier-scale machine for “large language models, multimodal systems, scientific simulations, and generative AI at frontier scale,” and made clear that Anthropic would use the extra horsepower “to directly improve capacity for Claude Pro and Claude Max subscribers.”
But here’s the twist: xAI had already moved its own training to Colossus 2. As TechCrunch put it, selling off Colossus 1 “turned xAI from a consumer to a provider of compute,” instantly monetizing an overbuilt asset “likely worth billions of dollars.” In a sector where Google is literally leaving cloud money on the table because it is “capacity constrained,” Musk did the opposite: he sold the capacity.
That shift is why one TechCrunch headline asked the obvious question: “Is xAI a neocloud now?”
The deal became official on May 6, when Anthropic and SpaceX/xAI dropped matching announcements.
Axios captured the headline trade: “Anthropic said Wednesday it has struck a deal to gain access to compute capacity from Elon Musk’s SpaceX,” giving it “more than 300 megawatts of new capacity (over 220,000 Nvidia GPUs) within the month,” all of Colossus 1’s power. Anthropic immediately “lift[ed] the five-hour rate caps for most paid subscribers,” scrapped peak-hour reductions on Claude Code for Pro and Max, and raised API limits for its most advanced Opus models.
The Verge translated that into user reality: Anthropic is “doubling five-hour rate limits for many Claude Code users, removing Claude Code’s peak hours limit reduction, and significantly increasing API rate limits for Claude Opus models,” and “credits the capacity to a new deal with SpaceX ‘to use all of the compute capacity at their Colossus 1 data center’ in Memphis.”
Business Insider added more color from Anthropic’s dev conference in San Francisco, where chief product officer Ami Vora “announced the blockbuster deal,” saying Anthropic would use SpaceX’s Memphis-based Colossus 1, “one of the world’s largest” clusters of Nvidia chips, to meet surging demand for Claude Code. The company “expects the deal to deliver more than 300 megawatts of computing capacity within the month, across more than 220,000 Nvidia GPUs,” it reported.
For users, the throughline was simple: the minute Anthropic plugged into Musk’s GPU firehose, the usage meter loosened.
Ars Technica summarized the consumer-facing side in its matter-of-fact headline: “Anthropic raises Claude Code usage limits, credits new deal with SpaceX,” noting that the arrangement follows big-capacity tie-ups with Microsoft and Amazon. The Next Web likewise highlighted that Claude Code’s five-hour rate limits would double and “peak-hours throttling” would be removed for Pro and Max, with the capacity “behind the change” coming from Anthropic’s agreement “to take all of SpaceX’s Colossus 1 data centre.”
AI Magazine went even bigger, calling it a “landmark agreement” that gives Anthropic access to Colossus 1, “one of the world’s largest and fastest-deployed AI supercomputers,” and quoting the company saying the deal will “substantially increase” its compute capacity.
The strangest part of this story isn’t the scale — it’s the speed of the volte-face.
In February, Musk had publicly derided Anthropic’s flagship Claude model as “misanthropic and evil.” By early May, Axios could plausibly run a feature titled, with some relish, “How Elon grew to love Anthropic,” noting that Musk had gone from calling the lab “evil” to doing business with it “in three months.” The explanation? Business necessity in a “severe compute deficit” world, and an opportunity to “stick it to his archrival, OpenAI CEO Sam Altman.”
Business Insider reported that Musk spent “a lot of time last week with Anthropic leaders” and came away charmed: “Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector,” he wrote on X, according to the outlet.
AI Magazine quoted a similar line from Musk: “I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed,” he wrote, adding that after that, he was “ok leasing Colossus 1 to Anthropic, as SpaceXAI had already moved training to Colossus 2.”
Axios, ever the political psychologist, framed the alliance in more cynical terms via tech analyst Ben Pouladian: “Elon’s enemy is Sam. Dario’s enemy is Sam. Enemy of my enemy is a compute partner.”
On X, Musk mostly spoke through retweets. He amplified one post boasting that “xAI and SpaceXAI have just made Colossus 1 available to Anthropic to support Claude,” meaning “more than 220,000 NVIDIA GPUs in one of the world’s largest and fastest-built AI superclusters are now helping to improve Claude’s user experience, code limits, and API capacity.”
He also boosted a fan’s framing of his grand strategy: “Elon Musk is playing a long game most people don’t think about… Not just improving life on Earth, but increasing how much energy humanity can use and control over time. Tesla, SpaceX and Starlink all fit into that direction.”
The subtext is clear: for Musk, leasing out Colossus 1 is just another move in a Kardashev-scale project to industrialize intelligence and energy.
If Musk’s story is about monetizing surplus, Anthropic’s is about diversifying dependencies.
In its own blog, the company slotted SpaceX alongside a who’s-who of hyperscalers. The Colossus 1 agreement “joins our other significant compute announcements,” the lab wrote, listing “an up to 5 gigawatt (GW) agreement with Amazon,” a separate “5 GW agreement with Google and Broadcom,” a strategic partnership with Microsoft and NVIDIA that includes “$30 billion of Azure capacity,” and a “$50 billion investment in American AI infrastructure with Fluidstack.”
AI Magazine traced the same arc, noting that in addition to the SpaceX deal, Anthropic’s 5GW agreement with Amazon, a “similar capacity deal with Google and Broadcom,” and the Microsoft/NVIDIA partnership all “contributed to these increased Claude usage limits.”
Anthropic’s line is that such geographic and provider diversity is now table stakes for serious enterprise AI. “Our enterprise customers—particularly those in regulated industries like financial services, healthcare, and government—increasingly need in-region infrastructure to meet compliance and data residency requirements,” the company wrote, pitching its expansion as focused on “democratic countries with secure legal and regulatory frameworks,” and pledging to “invest back into the communities that host our data centers.”
In that framing, Musk’s GPUs aren’t a bet on a single mercurial billionaire, but just one leg of a deliberately many-legged stool.
TechCrunch’s “neocloud” label isn’t just wordplay. The Anthropic deal “immediately monetized one of the company’s most impressive accomplishments,” it argued, suggesting that “the company’s real business may be more about building data centers than training AI models.”
Axios extended that logic to public markets, pointing out that the deal lets Musk “turn unused compute into revenue before an expected SpaceX IPO next month,” while fueling SpaceX’s ambition to become an “AI powerhouse” ahead of what is billed as the largest IPO in corporate history.
Meanwhile, Business Insider noted that Anthropic isn’t SpaceX’s first customer: Cursor, a hot coding startup, is already training its latest model on xAI’s GPUs, part of a compute-sales business that lets SpaceX and xAI “generate revenue from their own infrastructure, while also still developing AI models.”
The message to investors is unmistakable: if you can’t beat the AI labs, sell them power.
Both sides are also dangling something much weirder than Memphis: data centers in space.
SpaceX’s announcement was explicit: “As part of this agreement, Anthropic also expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity,” it said, arguing that “the compute required to train and operate the next generation of these systems is outpacing what terrestrial power, land, and cooling can deliver on the timelines that matter.”
Anthropic echoed that “we have also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity,” positioning space-based clusters as a way to tap “near-limitless sustainable power with less impact on Earth,” if the engineering can be made to work.
Musk, unsurprisingly, turned that line into social-media spectacle by retweeting xAI’s boast that “SpaceXAI and @AnthropicAI have also expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity.”
AI Magazine framed this as “Orbital AI Compute Moves Closer to Reality,” casting the SpaceX–Anthropic axis as one of the few plausible paths to space-based supercomputers.
Step back, and the contradictions pile up fast.
Musk is suing OpenAI while underwriting OpenAI’s fiercest rival. Anthropic is marketing itself as a safety-first, multi-stakeholder AI lab while tethering a big chunk of its near-term capacity to a single, politically volatile supplier. SpaceX is turning its own AI lab into a kind of cloud utility at the exact moment when every other tech giant is hoarding GPUs for itself.
Axios captured that paradox in a single line: Musk’s Anthropic deal “helps him accomplish two things at once: turn unused compute into revenue… and stick it to his archrival, OpenAI CEO Sam Altman.”
For now, developers don’t care. Their Claude prompts run faster, their rate limits are higher, and the GPU famine has eased — at least until the next model launch.
But buried inside this partnership is a preview of AI’s next phase: labs as power utilities, rockets as data center trucks, and rival CEOs discovering that in an era of gigawatt-scale compute, even “enemies” eventually share the same grid.