OpenAI has signed a new agreement with AWS as part of its multi-cloud strategy and is spending heavily to secure its AI computing supply chain.
The company recently ended its exclusive cloud computing partnership with Microsoft. Since then, it has reportedly committed roughly $250 billion to Microsoft, $300 billion to Oracle, and now $38 billion to Amazon Web Services (AWS) in new multi-year agreements. The AWS deal is the smallest of the three, but it is a deliberate part of OpenAI’s diversification plan.
For industry leaders, OpenAI’s actions demonstrate that access to high-performance GPUs is no longer an on-demand commodity. It is a scarce resource that must be secured with large, long-term commitments.
The AWS agreement gives OpenAI access to hundreds of thousands of NVIDIA GPUs, including the new GB200 and GB300 accelerators, with the ability to scale to tens of millions of CPUs.
This infrastructure isn’t just for training tomorrow’s models; it is needed to run ChatGPT’s large-scale inference workloads today. “Scaling frontier AI requires massive, reliable compute,” said Sam Altman, co-founder and CEO of OpenAI.
This surge in spending is forcing the hyperscalers to respond competitively. While AWS remains the industry’s largest cloud provider, Microsoft and Google have recently posted faster cloud revenue growth, much of it driven by new AI customers. For AWS, this agreement is a way to secure the industry’s flagship AI workload and to prove it can run AI at scale, including clusters of more than 500,000 chips.
AWS isn’t just providing standard servers. It is building a sophisticated architecture specifically for OpenAI that uses EC2 UltraServers to link GPUs and deliver the low-latency networking needed for training at scale.
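For context, the building blocks AWS describes map onto familiar EC2 primitives. The sketch below is a generic illustration of that pattern, combining a cluster placement group with Elastic Fabric Adapter (EFA) networking; it is not OpenAI’s actual configuration, and the instance type, AMI, and network IDs are placeholders.

```python
# Illustrative sketch only: provisioning a tightly coupled GPU cluster on
# EC2 using a cluster placement group (physical proximity for low latency)
# and an EFA network interface (the high-bandwidth fabric used for
# distributed training). All IDs and the instance type are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Pack instances close together to minimize inter-node latency.
ec2.create_placement_group(GroupName="gpu-training-pg", Strategy="cluster")

# Launch GPU instances into the placement group with EFA enabled.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder Deep Learning AMI
    InstanceType="p5.48xlarge",        # example NVIDIA GPU instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "gpu-training-pg"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",
        "SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],      # placeholder security group
    }],
)
```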
“The breadth and immediate availability of optimized compute shows why AWS is uniquely positioned to support OpenAI’s massive AI workloads,” said Matt Garman, CEO of AWS.
But “immediate” is relative. The full capacity of the contract is expected to be deployed by the end of 2026, with options for further expansion through 2027. That timeline is a useful reality check for executives planning AI rollouts: hardware supply chains are complex and operate on multi-year horizons.
So what should corporate leaders take away from this?
First, the “build or buy” debate over AI infrastructure is largely over. OpenAI is committing hundreds of billions of dollars to build on rented hardware; few, if any, other companies can or should follow suit. That pushes the rest of the market firmly toward managed platforms such as Amazon Bedrock, Google Vertex AI, and IBM watsonx, with the hyperscalers absorbing the infrastructure risk.
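For most organizations, the “buy” path looks less like racking GPUs and more like a few lines of SDK code. Below is a minimal sketch using Amazon Bedrock’s Converse API; the model ID is only an example, and availability varies by account and region.

```python
# Minimal sketch of the "buy" path: invoking a managed model on Amazon
# Bedrock rather than operating GPU infrastructure. The model ID is an
# example and may differ in your account/region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Summarize our Q3 sales."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```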
Second, the days of sourcing AI workloads from a single cloud may be over. OpenAI’s shift to a multi-provider model is a textbook case of mitigating concentration risk. For CIOs, relying on a single vendor for the compute behind core business processes is increasingly a gamble.
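In practice, mitigating that concentration risk often comes down to a thin abstraction layer with failover across providers. The sketch below is hypothetical; the two stub functions stand in for real provider SDK calls.

```python
# Hypothetical multi-provider failover pattern: try the primary provider,
# fall back to a secondary on failure. The stubs below simulate provider
# SDK calls; they are not real APIs.
def call_primary(prompt: str) -> str:
    raise ConnectionError("primary provider unavailable")  # simulated outage

def call_secondary(prompt: str) -> str:
    return f"[secondary] response to: {prompt}"

def generate(prompt: str) -> str:
    # No single vendor is a hard dependency for the core workload.
    for provider in (call_primary, call_secondary):
        try:
            return provider(prompt)
        except Exception:
            continue  # fall through to the next provider
    raise RuntimeError("all providers failed")

print(generate("Draft a status update."))
```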
Finally, AI budgeting has left the realm of departmental IT and entered the world of corporate capital planning. These commitments are no longer variable operating expenses. Securing AI compute is a long-term financial commitment, akin to building a new factory or data center.
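As a rough, back-of-envelope illustration of why this belongs in capital planning (only the $38 billion headline figure comes from the deal; the contract term here is an assumption):

```python
# Back-of-envelope only: a multi-year compute commitment amortized like
# capex. The seven-year term is an assumption for illustration.
commitment_usd = 38e9        # headline AWS commitment from the article
term_years = 7               # assumed contract term
annual_run_rate = commitment_usd / term_years
print(f"Implied annual run-rate: ${annual_run_rate / 1e9:.1f}B per year")
# Output: Implied annual run-rate: $5.4B per year
```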
See also: Qualcomm unveils AI data center chip to enter inference market


