Is OpenAI Inflating the AI Bubble? ChatGPT Maker’s $38Bn Cloud Deal with Amazon Explained

OpenAI agreed a seven-year, $38 billion pact with AWS for Nvidia GB200/GB300 GPUs and expansive CPU capacity, aiming to scale ChatGPT training and inference, with capacity coming online before end-2026

OpenAI CEO Sam Altman
Summary
  • OpenAI and AWS’s $38bn, seven-year pact secures Nvidia GB200 and GB300 GPUs

  • Agreement supplies hundreds of thousands of Nvidia accelerators and tens of millions of CPUs, enabling agentic workloads at scale

  • Deal diversifies OpenAI beyond Microsoft; multi-cloud sourcing reduces vendor dependency


OpenAI on Monday announced a seven-year, $38 billion cloud computing agreement with Amazon Web Services (AWS). This deal will give the ChatGPT maker access to “hundreds of thousands” of Nvidia AI accelerators and the ability to scale to tens of millions of CPUs, with capacity targeted to come online before the end of 2026.

Under the strategic supply agreement, AWS will provide OpenAI with Nvidia GPUs (GB200s and GB300s) delivered via EC2 UltraServer clusters, along with very large CPU pools, designed to support both inference for ChatGPT and the training of next-generation models.

AWS frames the pact as providing immediate and expandable capacity to “rapidly scale agentic workloads.” OpenAI’s Sam Altman called the move a step toward strengthening the “broad compute ecosystem.”

OpenAI’s Big Deal

This isn’t OpenAI’s first major infrastructure commitment. Between 2023 and 2025, the company expanded its partnerships beyond Microsoft, adding Google Cloud, Oracle, and other providers, and is estimated to have secured hundreds of billions of dollars in multi-year compute commitments.


In 2025, OpenAI’s biggest collaborations include a multi-billion-dollar partnership with SoftBank Group to establish SB OpenAI Japan; an expanded strategic alliance with Microsoft focused on Azure integration and IP sharing; and a landmark infrastructure deal with Nvidia to deploy at least 10 gigawatts of systems, backed by an investment commitment of up to $100 billion. The company also entered a major partnership with AMD to deploy 6 gigawatts of GPUs over multiple years.

OpenAI has publicly discussed plans to invest as much as $1.4 trillion to build roughly 30 gigawatts of computing capacity, an expansion Reuters describes as comparable to the scale of the world’s largest data centers, to meet its model-training ambitions.
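
For scale, a back-of-the-envelope calculation (ours, not OpenAI’s): $1.4 trillion spread across roughly 30 gigawatts works out to about $47 billion per gigawatt of capacity.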

Towards Amazon, Away from Microsoft

For AWS, landing OpenAI is a strategic win that undercuts narratives that it had fallen behind Microsoft and Google in the AI arms race.


PP Foresight analyst Paolo Pescatore told Reuters the arrangement is a “strong endorsement” of AWS’s ability to provide reliable, large-scale AI compute.

The deal also follows a governance and commercial restructuring at OpenAI that removed Microsoft’s prior right of first refusal for compute, enabling wider cloud sourcing.

OpenAI still has large commitments with Microsoft’s Azure but is deliberately diversifying to avoid single-vendor dependence, Reuters reported. This approach gives OpenAI negotiating leverage and resilience but also spreads demand (and revenue) across hyperscalers.

Has This Deal Inflated the AI Bubble?

Many observers say yes, at least in magnitude. The sheer scale of multi-hundred-billion (and in some coverage, trillion-plus) compute commitments by a single firm amplifies concerns that valuations and spending are running ahead of sustainable, near-term revenues.

Warnings have come from the Bank of England and the International Monetary Fund, as well as from JPMorgan chief executive Jamie Dimon, who told the BBC that “the level of uncertainty should be higher in most people’s minds”.

Advertisement

Reuters notes that analysts and investors worry OpenAI’s enormous spending pledges and rapidly ballooning commitments could be signs of a frothy cycle. Regulators are also watching hyperscaler-AI tie-ups; the FTC and others have scrutinised big tech deals in the space as competition and concentration questions surface.

Why Do AI Giants Keep Pouring Money Into Compute?

Large language models and multimodal systems scale strongly with compute. Better results often require exponentially more GPUs for training and huge inference capacity to serve users.
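
A rough rule of thumb from the scaling-law literature (an illustration, not a figure from the deal): training compute C grows as C ≈ 6 · N · D floating-point operations, where N is the model’s parameter count and D the number of training tokens. Scaling both tenfold therefore demands roughly a hundred times the compute, which is why capacity commitments balloon so quickly.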

Beyond model quality, firms chase product differentiation (agents, real-time multimodal apps) and enterprise revenue streams, making compute both an R&D requirement and a commercial bottleneck.

In short, to stay competitive on capability and latency, you either buy chips or fall behind. Amazon’s own release underlines that “scaling frontier AI requires massive, reliable compute.”

Industry Voices

OpenAI’s Sam Altman hailed the partnership as ecosystem strengthening, and AWS CEO Matt Garman said AWS is “uniquely positioned” to support vast AI workloads.


Independent analysts gave mixed takes: some called it a strong endorsement of AWS’s infrastructure, while others warned that the scale heightens bubble risk and financing pressure on a still-loss-making OpenAI. Reuters quotes Pescatore as calling the pact “hugely significant.”

In the coming months, expect continued multi-cloud strategies from major model developers, more long-term capacity deals, and greater pressure on chip supply chains. Regulators and customers will also push for clearer terms around exclusivity, pricing and data controls.
