OpenAI Signs $38 Billion Compute Deal with Amazon, Breaking From Microsoft Exclusivity

OpenAI has signed a massive $38 billion deal with Amazon Web Services, marking its first-ever partnership with the cloud leader and one of its most significant moves away from Microsoft. The agreement gives OpenAI access to hundreds of thousands of Nvidia GPUs hosted in AWS data centers across the U.S., with capacity expansions planned through 2026 and beyond.


Shares of Amazon jumped roughly 5% Monday after the announcement, underscoring the significance of the partnership for both companies. The deal immediately allows OpenAI to run workloads on AWS infrastructure while Amazon builds additional dedicated capacity for the ChatGPT maker. “It’s completely separate capacity that we’re putting down,” said Dave Brown, vice president of compute and machine learning services at AWS. “Some of that capacity is already available, and OpenAI is making use of that.”


The move highlights OpenAI’s rapid diversification since ending its exclusive cloud relationship with Microsoft earlier this year. Microsoft first invested in OpenAI in 2019 and has since poured roughly $13 billion into the company, but its right of first refusal for new compute requests expired last week. That opened the door for OpenAI to sign deals with Google, Oracle, and now AWS — the largest cloud provider by market share.


“Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in a statement. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”


Despite the new arrangement, OpenAI said it will continue spending heavily with Microsoft, committing to purchase an additional $250 billion in Azure services, a commitment announced last week. Still, the AWS deal cements OpenAI's independence as it prepares for a likely IPO, part of a broader strategy to balance partnerships across major cloud providers.


For Amazon, the agreement is both a technical and symbolic win. AWS already hosts workloads for OpenAI rival Anthropic, in which Amazon has invested billions, and is currently constructing an $11 billion data center campus in Indiana dedicated to training Anthropic's models. "The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads," AWS CEO Matt Garman said Monday.


The OpenAI deal will initially rely on Nvidia's newest Blackwell chips but could later incorporate other processors. While Amazon has promoted its own Trainium chips, which Anthropic uses for lower-cost training, no such plans have been confirmed for OpenAI. The new capacity will power both training of next-generation foundation models and real-time inference for products like ChatGPT.


Industry analysts say the agreement represents a turning point in the cloud wars, giving OpenAI flexibility to hedge across hyperscalers while maintaining compute access at massive scale. It also signals that OpenAI's long-term ambitions extend well beyond Microsoft's orbit. With more than $1.4 trillion in infrastructure commitments across providers, the company is securing the computational foundation for the next decade of AI.


As Altman put it in his announcement, “This partnership is about ensuring we have the capacity to keep advancing safely and reliably — not just this year, but for many years to come.”
