Anthropic’s Claude chatbot just got a massive power-up — and it’s running on Elon Musk’s machines. A safety-obsessed OpenAI rival is now tethered to the world’s most polarizing tech billionaire for the thing that matters most in modern AI: GPUs.

The crunch before the deal

By early 2026, Anthropic was hitting the wall on compute. The company told developers it had seen “80x growth per year in revenue and usage” in Q1 — when it had only planned for 10x — leading to “difficulties with compute” and aggressive usage caps that infuriated power users.

That crunch collided with Anthropic’s strategy of sourcing capacity from every hyperscaler willing to sell it. The lab had already announced an up to 5 GW agreement with Amazon, a 5 GW deal with Google and Broadcom, a strategic partnership with Microsoft and NVIDIA that included $30 billion of Azure capacity, and a $50 billion investment in U.S. AI infrastructure with Fluidstack. Even with all that, Claude was compute-starved.

Meanwhile, Musk’s newly consolidated AI ambitions — xAI folded into SpaceX, running on a sprawling data-center buildout branded Colossus — had their own problem: too many chips and not enough customers. Grok’s user numbers had slumped after high-profile image-generation mishaps, leaving xAI with capacity that was “much more than what Grok needs to operate.”

SpaceX/xAI had already moved training to a newer supercomputer, Colossus 2, leaving Colossus 1 — a 300 MW Memphis facility — largely surplus to internal needs. To some, that looked less like a triumphant AI product story and more like an overbuilt infrastructure play in search of revenue.

May 6: Anthropic and SpaceX go public

On May 6, Anthropic went first. In a corporate blog post titled, with characteristic understatement, “Higher usage limits for Claude and a compute deal with SpaceX,” the company revealed it had “signed an agreement with SpaceX to use all of the compute capacity at their Colossus 1 data center,” giving it “access to more than 300 megawatts of new capacity (over 220,000 NVIDIA GPUs) within the month.”

Anthropic said the deal would “substantially increase” its compute capacity and immediately translate into better service: doubling Claude Code’s five‑hour rate limits for Pro, Max, Team and seat-based Enterprise plans, removing peak‑hours throttling for Pro and Max, and “raising our API rate limits considerably for Claude Opus models.”

Axios captured the stakes bluntly: Anthropic was “struggling to meet the needs of developers,” and the SpaceX deal gives it the headroom to lift five‑hour rate caps for most paid subscribers and raise Opus API limits.

The Verge put it in user terms: Anthropic is “doubling five-hour rate limits for many Claude Code users, removing Claude Code’s peak hours limit reduction, and significantly increasing API rate limits for Claude Opus models, starting today,” crediting “a new deal with SpaceX ‘to use all of the compute capacity at their Colossus 1 data center’ in Memphis.”

A few hours later, xAI published its own framing. In a brief announcement, it said SpaceXAI had “signed an agreement with Anthropic to provide access to Colossus 1, one of the world’s largest and fastest-deployed AI supercomputers,” built “from the ground up in record time” and featuring “over 220,000 NVIDIA GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators.” Anthropic, xAI added, would use this “to directly improve capacity for Claude Pro and Claude Max subscribers.”

Musk’s victory lap — and a pivot

Musk didn’t stay quiet. On X, he boosted a post from @tetsuoai announcing that “xAI and SpaceXAI have just made Colossus 1 available to Anthropic to support Claude,” meaning “more than 220,000 NVIDIA GPUs in one of the world’s largest and fastest-built AI superclusters are now helping to improve Claude’s user experience, code limits, and API capacity.”

For Musk-world observers, the deal was more than a footnote. Axios argued the arrangement lets him do “two things at once: turn unused compute into revenue before an expected SpaceX IPO next month — and stick it to his archrival, OpenAI CEO Sam Altman.” As analyst Ben Pouladian put it in a widely circulated X post Musk tacitly endorsed: “Elon’s enemy is Sam. Dario’s enemy is Sam. Enemy of my enemy is a compute partner.”

It also marks a whiplash-inducing change of heart. In February, Musk had called Anthropic “misanthropic and evil” on X over its sky‑high valuation; three months later, Axios notes, “Musk went from calling Anthropic ‘evil’ to doing business with it,” underscoring how “quickly competition can give way to strategic necessity in the AI race.”

Business Insider, which first detailed Musk’s about‑face, reported that the billionaire spent time with Anthropic leaders and came away reassured: “Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector,” he told followers.

In AI Magazine’s account, Musk himself framed it as a safety due-diligence tour: “I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed… After that, I was ok leasing Colossus 1 to Anthropic, as SpaceXAI had already moved training to Colossus 2.”

The neocloud theory: xAI as infrastructure business

Outside the Musk orbit, analysts read the move as a strategic tell. TechCrunch asked bluntly: “Is xAI a neocloud now?” noting that the Anthropic partnership “immediately monetized one of the company’s most impressive accomplishments, turning xAI from a consumer to a provider of compute.”

Instead of hoarding Colossus 1 capacity to train its own future Grok successors — the path Google and Meta have chosen, even at the expense of cloud revenue — xAI is selling the whole thing to a direct model rival. “It suggests the company’s real business may be more about building data centers than training AI models,” TechCrunch argued, at a moment when “companies like Google and Meta, who are also training models, are building more data centers” and jealously guarding their GPU stockpiles.

Axios read the same signal: SpaceX is “working to become an AI powerhouse before an expected IPO this fall,” and the Anthropic lease shows how its infrastructure can be spun into cash even when xAI’s own products stumble.

That reading meshes uncomfortably well with local reports out of Memphis. Business Insider has chronicled how Musk’s rapid Colossus buildout “provoked ire locally, with residents complaining of pollution from gas turbines” powering the massive data centers. Selling all that capacity to third parties doesn’t make those emissions go away; it just puts more pressure on keeping the turbines spinning.

Claude, by Musk — whether users like it or not

For Anthropic’s customers, the upside is obvious: fewer “capacity constrained” error messages and bigger budgets for code and tokens. Multiple outlets — The Next Web, Ars Technica, The Verge — all emphasized the same concrete win: Claude Code’s five‑hour rate limits are doubling for Pro, Max, Team and Enterprise, peak‑hour throttling is gone, and Claude Opus API limits are jumping significantly.

But that new reliability comes with a branding sting Anthropic can’t fully control. Business Insider ran back‑to‑back pieces underlining that Claude’s newfound muscle is literally Musk‑powered. One, headlined “Anthropic taps Elon Musk’s SpaceX for more AI compute power,” detailed how the lab will draw “more than 300 megawatts of computing capacity within the month, across more than 220,000 Nvidia GPUs” from Colossus 1. Another, more biting headline declared: “Claude, brought to you by Elon Musk.”

There’s a reputational tension here. Anthropic has marketed itself as the safety‑first, governance‑heavy alternative to OpenAI — the lab that writes constitutional AI manifestos and frets publicly about catastrophic risk. Now its flagship product’s uptime is deeply entangled with Musk, whose public persona veers from anti‑“woke” crusader to self‑styled guardian against “hostile AI.”

Some coverage treats that as an almost existential irony. Axios framed the deal as “Business imperatives turn[ing] Musk’s AI rival into a partner,” pointing out that the same man who spent months suing OpenAI and blasting Anthropic is now underwriting their capacity needs because it helps him and hurts Sam Altman at the same time.

Beyond Earth: the orbital compute gambit

Both Anthropic and SpaceX plainly see this as chapter one, not the whole story. Buried in Anthropic’s own announcement and amplified by xAI is a more audacious clause: “As part of this agreement, we have also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity.”

xAI’s press release made the motivation explicit: “The compute required to train and operate the next generation of these systems is outpacing what terrestrial power, land, and cooling can deliver on the timelines that matter,” and “space-based compute offers near-limitless sustainable power with less impact on Earth” if the engineering challenges can be solved.

On X, Musk retweeted xAI’s note that “SpaceXAI and @AnthropicAI have also expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity,” turning what could have been fine print into a flagship talking point for his Kardashev‑scale, civilization‑upgrading narrative.

AI Magazine underscored how tightly those ambitions are now coupled: SpaceXAI has “allowed Anthropic access to Colossus 1 AI supercomputer, increasing Claude limits, as the duo partners on multiple GWs of orbital AI compute,” positioning this terrestrial lease as the on‑ramp to space‑based superclusters.

Who’s really in control?

In the near term, there’s no mystery about who gains what. Anthropic gets to turn away fewer paying customers and shake off its reputation for stingy limits. Musk gets billions in revenue, a marquee AI customer, and a clean way to portray his giant GPU hoard as a bet on “making our entire civilization win,” rather than a stranded asset.

Longer term, the partnership raises sharper questions. If xAI is, as TechCrunch puts it, behaving more like a “neocloud” than a pure model lab, then Anthropic is now one of its anchor tenants — and a proof point when it goes shopping for more. If orbital compute ever leaves the PowerPoint stage, the same duo that just rewired Claude to Memphis could be wiring the next generation of frontier systems to servers in orbit.

Anthropic wanted out of a compute corner. It found daylight — and a door — in Elon Musk’s data centers. The open question is how much of Claude’s future it just agreed to route through that door.
