Amazon is already offering new OpenAI products on AWS
A day after OpenAI got Microsoft to agree to end exclusive rights, AWS announced a slate of OpenAI model offerings, including a new agent service.
OpenAI’s most advanced models are no longer chained to a single cloud. Within 48 frenetic hours, Microsoft’s grip loosened, Amazon pounced, and OpenAI tried to convince the world this is strategic expansion—not a scramble to pay for an eye‑watering AI infrastructure bill.
For three years, the generative AI boom was defined by a simple structural fact: if you wanted OpenAI’s frontier models at cloud scale, you went through Microsoft Azure. That arrangement ended when Microsoft agreed to drop its exclusive reselling rights, converting its license to OpenAI’s intellectual property from exclusive to non‑exclusive while keeping it in place through 2032.
The timing wasn’t just contractual housekeeping. The shift came as OpenAI faces a wall of financial and operational pressure: the company reportedly missed key revenue and user growth targets, with an expected $25 billion in cash burn against $30 billion in revenue and “hundreds of billions” in infrastructure commitments to hyperscalers—AWS, Azure, and Oracle—based on growth it has not yet proven it can deliver.
In other words: the era of a single, privileged cloud pipeline for OpenAI was colliding with the reality of massive, multi‑cloud bills that require one thing—distribution.
Microsoft’s move freed OpenAI’s models to live on rival clouds. Amazon didn’t wait.
On Tuesday, Amazon Web Services announced that it would begin selling OpenAI’s models to its cloud customers—just one day after the Microsoft restructuring was unveiled. Some of OpenAI’s latest models would be available in preview immediately, with the most powerful GPT models following within weeks.
This wasn’t just opportunistic timing. It completed a restructuring that had started months earlier, when Amazon committed up to $50 billion as part of OpenAI’s $110 billion funding round, valuing the ChatGPT maker at a staggering $852 billion—Amazon’s largest‑ever investment in any company. In return, OpenAI committed to spending $100 billion on AWS computing power and Trainium chips over eight years, consuming two gigawatts of capacity.
AWS chief executive Matt Garman framed the deal as overdue customer demand finally being met: “It’s something that our customers have asked for, for a really long time.” That line does double duty: it casts AWS as simply responding to market pressure, and it subtly positions Microsoft’s exclusivity as an artificial constraint on choice.
Tech press watching the market wasted no time in spelling out what this meant: “Amazon is already offering new OpenAI products on AWS,” TechCrunch reported, underscoring how quickly AWS moved once the Microsoft wall came down. Another outlet put the stakes in starker terms: “AWS to sell OpenAI models after Microsoft drops exclusivity, as OpenAI misses revenue targets and faces $100B infrastructure commitments.”
Two days later, OpenAI tried to seize the narrative.
In its own announcement, the company described the development not as a retreat from its Microsoft‑first past, but as an expansion of a “strategic partnership” with Amazon: “Today, OpenAI and AWS are expanding our strategic partnership to help enterprises build using OpenAI capabilities in their AWS environments.”
The language is carefully calibrated. Instead of talking about exclusivity ending, OpenAI leans into enterprise pragmatism: “We’re excited to give AWS customers access to the best frontier models, agents, and tools, which will operate within the systems, security protocols, compliance requirements, and workflows they already use.”
The expanded partnership turns into a product story, with three pillars “launching today in limited preview”: OpenAI models on AWS, Codex on AWS, and Amazon Bedrock Managed Agents powered by OpenAI. OpenAI pitches this as a way to give organizations “more ways to use OpenAI across application development, software engineering, and agentic workflows—while building within the infrastructure, security, governance, and procurement workflows they already use on AWS.”
At the center is GPT‑5.5, which OpenAI calls its “best frontier model.” It’s coming to Amazon Bedrock so that “customers can now build with OpenAI models in AWS, alongside the services, security controls, identity systems, and procurement processes they already rely on.”
OpenAI casts this as a friction‑reduction play: “For many companies, using AI at scale requires bringing the best models to the systems their teams already use. That’s why we’re launching OpenAI models, including our best frontier model GPT‑5.5, on Amazon Bedrock.” The promise is a “clear single path from experimentation to production,” with OpenAI capabilities living inside the AWS environments where enterprises already run their most important workloads.
Codex, OpenAI’s code‑generation and software‑automation workhorse, is also part of the package. OpenAI says “more than 4 million people now use Codex every week,” applying it across the software development lifecycle—from writing and refactoring code to generating tests and modernizing legacy systems. Increasingly, Codex is also being used to “accelerate research, analysis, and document-based work” by connecting to everyday apps and tools, from summarizing source material to building decks and spreadsheets.
The subtext of OpenAI’s narrative: this isn’t about bailing out of an exclusive relationship; it’s about meeting developers where they already live—on AWS as well as Azure.
From Amazon’s vantage point, this is a long‑awaited correction. After years of watching Microsoft enjoy a de facto monopoly on OpenAI’s most powerful models at cloud scale, AWS can now tell customers they don’t have to choose between the dominant cloud platform and the most hyped AI stack.
Garman’s comment that customers had been asking for this “for a really long time” is more than a talking point. It implies pent‑up demand and paints AWS as finally unshackled to compete on AI content, not just infrastructure. It also offers a neat internal story: Amazon’s record‑breaking $50 billion bet on OpenAI is already yielding a differentiated Bedrock and Managed Agents story.
OpenAI’s narrative centers on normalizing multi‑cloud and sidestepping the sense of crisis created by its financial obligations. Officially, this is about “bringing together” frontier models, Codex, and managed agents to give organizations “more flexibility in how they build with OpenAI, from new AI applications to intelligence embedded in existing products to agentic workflows that can reason, take action, and support more complex business processes.”
The company emphasizes integration over disruption: its models and tools will “operate within the systems, security protocols, compliance requirements, and workflows” enterprises already use on AWS. That messaging is aimed squarely at risk‑averse CIOs who want cutting‑edge AI without re‑architecting everything around a single provider.
Still, the hard numbers reported elsewhere—$25 billion in expected cash burn and $100 billion committed to AWS infrastructure alone—hang over the narrative. Whatever the spin, OpenAI now needs volume on every major cloud it can reach.
Microsoft’s voice is quieter in this round of announcements, but the restructuring is a loud signal. Its IP license remains in place through 2032, yet it is no longer exclusive. That suggests Redmond is confident it doesn’t need contractual chokeholds to stay central to the OpenAI ecosystem; its bet is that tight product integration (Windows, Office, GitHub, Azure) will do the work instead.
At the same time, letting exclusivity go broadens OpenAI’s revenue channels—making it more likely the company can actually pay for the massive Azure build‑out Microsoft has bankrolled. In that sense, allowing OpenAI onto AWS is a hedge for Microsoft as much as it is a competitive opening for Amazon.
For developers and enterprises, the practical implications are immediate: OpenAI’s frontier models are now available in Amazon Bedrock, Codex runs on AWS, and Bedrock Managed Agents can be powered by OpenAI models, all in limited preview and inside the security, governance, and procurement workflows teams already use.
The broader industry consequence is starker: the foundational AI race is no longer just about whose models are better. It’s about whose balance sheet can support tens or hundreds of billions in infrastructure, and which clouds can convert that capex into usage fast enough.
The question one analysis put most bluntly still hangs over all of this: “The question the deal answers is not whether OpenAI’s models are good enough to sell on rival clouds. The question is whether OpenAI can sell enough of them, anywhere, to justify what it has promised to spend.”
With OpenAI now wired into both Azure and AWS—and backed by commitments that would terrify most national treasuries—that question is no longer academic. It’s the business model.