AMD and Oracle have announced a significant expansion of their multigenerational, long-term partnership for chip supply. As you may have guessed, the voracious resource consumption of AI servers is behind the announcement.
The agreement is similar to the one announced by AMD and OpenAI last week and aims to build next-generation AI infrastructure, specifically by enhancing Oracle Cloud Infrastructure (OCI), which will be a launch partner for the first publicly available AI supercluster.
To this end, AMD will deliver Instinct MI450 Series accelerators, with an initial deployment of 50,000 units starting in the third quarter of 2026 and an expansion from 2027 onward. This announcement builds on the two companies' ongoing work to offer AMD Instinct GPU platforms on OCI to end customers, beginning with the launch of AMD Instinct MI300X GPUs in 2024 and extending through the general availability of OCI Compute with AMD Instinct MI355X GPUs.
AMD and Oracle, together for AI
Demand for large-scale AI capacity is accelerating as next-generation models outgrow the limits of current AI clusters. To train and run these workloads, customers need flexible, open computing solutions designed for extreme scalability and efficiency.
OCI’s planned new AI superclusters will be based on AMD’s “Helios” rack design, which combines AMD Instinct MI450 Series GPUs, next-generation AMD EPYC CPUs codenamed “Venice,” and next-generation AMD advanced networking codenamed “Vulcano.” This vertically optimized rack architecture is designed to deliver maximum performance, scalability, and energy efficiency for large-scale AI training and inference.
“Our customers are developing some of the most ambitious AI applications in the world, and that requires robust, scalable, high-performance infrastructure,” explains Mahesh Thiagarajan, executive vice president of Oracle Cloud Infrastructure. “By integrating the latest innovations in AMD processors with OCI’s secure, flexible platform and advanced networking powered by Oracle Acceleron, customers can push the limits with confidence. Thanks to our decade-long collaboration with AMD, from EPYC to AMD Instinct accelerators, we continue to offer the best-value open, secure, and scalable cloud foundation, together with AMD, to meet customer needs in this new era of AI.”
To give customers developing, training, and running AI inference at scale more options, OCI also announced the general availability of a platform built on the current AMD Instinct MI355X accelerators. They will be available in the zettascale OCI supercluster, which can scale up to 131,072 GPUs. AMD Instinct MI355X-powered instances are designed for superior value, cloud flexibility, and open source support.
