AI News: OpenAI Just Spent $38 Billion to Stop Being Microsoft's Problem
In the clearest sign yet that OpenAI is done being exclusively dependent on Microsoft, the company just signed a $38 billion, seven-year deal with Amazon Web Services. This is OpenAI's first major partnership with AWS, and it comes less than a week after a corporate restructuring allowed the company to bypass Microsoft's "right of first refusal" for cloud computing. Amazon's stock hit a record high on the news, jumping 4% and validating AWS's bet on AI infrastructure.

The announcement: On November 3, 2025, OpenAI and Amazon Web Services announced a strategic partnership worth $38 billion over seven years, providing OpenAI with immediate access to hundreds of thousands of NVIDIA GPUs through Amazon EC2 UltraServers. The deal includes capacity to scale to tens of millions of CPUs and represents one of the largest cloud computing agreements in history. AWS CEO Matt Garman stated that "AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions," while OpenAI CEO Sam Altman emphasized that "scaling frontier AI requires massive, reliable compute." The partnership follows OpenAI's recent restructuring, which granted the company greater operational freedom, and comes just days after OpenAI announced an additional $250 billion commitment to Microsoft Azure.

What's happening: OpenAI will immediately begin running core AI workloads on AWS infrastructure, tapping hundreds of thousands of state-of-the-art NVIDIA GB200 and GB300 GPUs. The clusters are built on Amazon EC2 UltraServers with low-latency interconnects, enabling efficient processing for everything from ChatGPT inference to training next-generation frontier models. All of the contracted capacity is targeted for deployment by the end of 2026, with potential expansion into 2027 and beyond. The infrastructure will support both inference workloads (powering ChatGPT's real-time responses) and training runs for next-generation models at unprecedented scale. AWS brings experience operating large-scale AI clusters exceeding 500,000 chips, giving OpenAI reliability at scale from day one, with an architecture designed to maximize AI processing efficiency across interconnected systems.