Anthropic’s latest power play isn’t a new model – it’s a mountain of GPUs. By locking in xAI’s Colossus 1 supercomputer, the safety‑first AI lab just turned a looming compute crunch into a growth story, while Elon Musk’s camp quietly recasts itself as a cloud giant in all but name.

Early May: The deal drops

On May 6, Anthropic and SpaceX/xAI jointly unveiled a sweeping compute deal centered on Colossus 1, a Memphis‑based AI supercomputer that xAI bills as “one of the world’s largest and fastest‑deployed AI supercomputers,” built from the ground up for large‑scale training and inference. The machine packs “over 220,000 NVIDIA GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators,” aimed at “large language models, multimodal systems, scientific simulations, and generative AI at frontier scale.”

Anthropic’s public spin was simple: more compute, more Claude. In its own announcement, the company said it had “signed an agreement with SpaceX to use all of the compute capacity at their Colossus 1 data center,” giving it access to “more than 300 megawatts of new capacity (over 220,000 NVIDIA GPUs) within the month.” That immediate bump, Anthropic said, would “directly improve capacity for Claude Pro and Claude Max subscribers.”

Instant impact: Claude gets uncapped

Anthropic didn’t wait to translate that hardware into product changes. Effective the same day, the company rolled out concrete upgrades to its most intensive offerings.

The firm said it was “doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans,” scrapping “the peak hours limit reduction on Claude Code for Pro and Max accounts,” and “raising our API rate limits considerably for Claude Opus models.”

Tech press quickly connected the dots. The Verge reported that Anthropic is “doubling five-hour rate limits for many Claude Code users, removing Claude Code’s peak hours limit reduction, and significantly increasing API rate limits for Claude Opus models, starting today,” and that the company “credits the capacity to a new deal with SpaceX ‘to use all of the compute capacity at their Colossus 1 data center’ in Memphis, noting recent announcements with Amazon, Google, and Microsoft.”

The Next Web echoed the same story from a developer‑experience angle: “Claude Code’s five-hour rate limits double across Pro, Max, Team, and seat-based Enterprise plans from Tuesday, with peak-hours throttling removed for Pro and Max. The capacity behind the change is a new Anthropic agreement to take all of SpaceX’s Colossus 1 data centre.”

For power users and enterprises, the turnaround was strikingly fast: in the morning, Claude’s guardrails were throttling heavy usage; by evening, the taps were noticeably looser.

Anthropic’s grand compute strategy

Inside Anthropic’s narrative, this isn’t a one‑off sweetheart deal; it’s another tile in an aggressive mosaic of mega‑infrastructure bets.

In the same announcement, the company reminded readers that the SpaceX agreement “joins our other significant compute announcements,” including “an up to 5 gigawatt (GW) agreement with Amazon, which includes nearly 1 GW of new capacity by the end of 2026,” “a 5 GW agreement with Google and Broadcom, which will begin coming online in 2027,” “a strategic partnership with Microsoft and NVIDIA that includes $30 billion of Azure capacity,” and its “$50 billion investment in American AI infrastructure with Fluidstack.”

Anthropic stressed that it “train[s] and run[s] Claude on a range of AI hardware—AWS Trainium, Google TPUs, and NVIDIA GPUs—and continue[s] to explore opportunities to bring additional capacity online.” The through line: hedge vendor risk, grab every watt of compute it can, and turn that into user‑visible throughput as fast as regulators and data centers will allow.

That strategy is also outward‑facing. Anthropic framed the new capacity around enterprise and international expansion, saying its customers in “regulated industries like financial services, healthcare, and government increasingly need in-region infrastructure to meet compliance and data residency requirements,” and that some of its capacity will be focused on “democratic countries with secure legal and regulatory frameworks,” with a commitment to “investing back into host communities.”

On paper, Anthropic is trying to be the rare frontier‑model shop that can tell banks, hospitals, and governments: yes, we have the GPUs – and yes, they’re in jurisdictions your lawyers can live with.

SpaceX/xAI: from AI lab to neocloud?

If Anthropic is using the partnership to solve a demand problem, SpaceX/xAI is using it to crystallize a business model.

On its own site, xAI boasted that Colossus 1 “delivers unprecedented scale for AI training, fine-tuning, inference, and high-performance computing workloads,” and emphasized that it was “built from the ground up in record time.” That sounds like a flagship internal asset – but the Anthropic deal effectively turns it into a rental property.

TechCrunch framed the move as a turning point: “xAI and Anthropic announced a surprise partnership that has the Claude-maker buying out ‘all of the compute capacity at [xAI’s] Colossus 1 data center,’ roughly 300MW that allowed Anthropic to immediately raise its usage limits. It’s a huge deal for xAI, likely worth billions of dollars. More importantly, it immediately monetized one of the company’s most impressive accomplishments, turning xAI from a consumer to a provider of compute.”

That’s a very different posture from other AI labs sitting on scarce GPU clusters. As TechCrunch noted, when giants like Google or Meta face a choice “between selling more available compute to customers and preserving some to build their own tools, they reliably choose door No. 2.”

Here, Musk’s camp chose the opposite. TechCrunch reports that while some observers saw the move as a shot at OpenAI, Musk’s explanation on X was more prosaic: training had already moved to a newer Colossus 2, so “xAI simply didn’t need them both.”

There’s also the cold math: xAI’s flagship consumer product, Grok, has reportedly “seen plummeting usage since the image generation debacles earlier this year,” and if “xAI’s data center buildout is that much more than what Grok needs to operate, partnering with Anthropic adds a lot of green to the balance sheet,” especially as SpaceX/xAI “speeds toward an IPO.”

The upshot, as TechCrunch bluntly puts it: “the Anthropic partnership sends an unusual message about where Elon Musk’s priorities really lie. It suggests the company’s real business may be more about building data centers than training AI models.”

Orbital AI: hype, hedge, or inevitability?

Both sides are also using the deal as a launchpad for something far more sci‑fi: AI compute in orbit.

According to SpaceX/xAI, as part of the agreement Anthropic “expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity,” arguing that “the compute required to train and operate the next generation of these systems is outpacing what terrestrial power, land, and cooling can deliver on the timelines that matter.” The pitch is that “SpaceX is the only organization with the launch cadence, mass-to-orbit economics, and constellation operations experience to make orbital compute a near-term engineering program rather than a research concept,” and that if the engineering puzzles can be cracked, “space-based compute offers near-limitless sustainable power with less impact on Earth.”

Anthropic’s own announcement echoes that interest in “develop[ing] multiple gigawatts of orbital AI compute capacity” in partnership with SpaceX. For now, it’s an aspiration tacked onto a very terrestrial data‑center contract. But the strategic logic is clear: if AI demand keeps growing exponentially, even today’s 5‑GW‑scale land deals may look quaint by the early 2030s.

Competing interpretations

Put together, May 6 looks less like a simple vendor contract and more like a fork in the industry’s road.

From the AI‑lab perspective, Anthropic is locking in lifelines. Between Amazon, Google, Microsoft, Fluidstack, and now SpaceX, the company is assembling a diversified, multi‑vendor compute empire to avoid being hostage to any one hyperscaler – and to boast immediate, user‑facing benefits like doubled code limits and higher Opus API throughput.

From the infrastructure perspective, SpaceX/xAI is edging into “neocloud” territory: a specialist compute provider monetizing surplus capacity instead of guarding it, and hinting that the long game is data centers – eventually in orbit – more than chatbots.

And from the user and enterprise perspective, the story is refreshingly concrete. Overnight, Claude Pro and Max subscribers get more headroom; Claude Code becomes less of a bottleneck; and API customers can push Opus harder without slamming into rate limits.

The tension, of course, is whether this arms race in power‑hungry supercomputers – now stretching from Memphis up toward low Earth orbit – can stay aligned with the safety, governance, and environmental promises companies like Anthropic have built their brands on. For now, the only clear winner is throughput: more tokens, more calls, more models, more often.

In 2024, the AI story was about models. By mid‑2026, it’s increasingly about who owns the substations and launchpads.