AMD today announced a series of launches focused on improving performance in data centers, artificial intelligence applications and high-performance systems. The headliners are the 5th generation EPYC processors, the new AMD Instinct MI325X accelerators and the Ryzen AI PRO 300 Series processors. These solutions are designed to address the growing demands of sectors such as cloud computing, artificial intelligence and business productivity.
The new 5th generation EPYC processors, based on the Zen 5 architecture, offer up to 192 cores per processor and are aimed at optimizing performance in data centers and AI-intensive workloads. Alongside them, the AMD Instinct MI325X accelerators, built on the CDNA 3 architecture, are designed to improve performance in AI model training and inference tasks, thanks to their HBM3E memory capacity and high bandwidth.
On the other hand, Ryzen AI PRO 300 Series processors are aimed at empowering commercial PCs with advanced AI capabilities, such as real-time transcription and translation, while improving energy efficiency and security. With these launches, AMD continues to expand its portfolio of solutions to deliver greater performance and efficiency in business and technology environments.
Let's take a look, below, at the main new features of these announcements.
EPYC
Among today's announcements, AMD has introduced its new 5th generation EPYC series of processors, aimed at improving performance and efficiency in data centers and enterprise applications. Based on the Zen 5 architecture, which already debuted in the Ryzen 9000 series for PCs, these chips offer up to 192 cores per unit, making them a top choice for environments and workloads that require high processing power, such as artificial intelligence, cloud services, and critical business applications. This new generation combines an increase in performance with a focus on energy efficiency, an increasingly relevant feature in modern data centers.
Among the new features of these processors are support for the widely used SP5 platform and the ability to handle up to 12 channels of DDR5 memory per processor. These improvements enable increased bandwidth and optimized performance in demanding workloads. Additionally, they include support for PCIe Gen5 and AVX-512, features that reinforce their ability to handle compute-intensive applications. These specifications can improve operational efficiency and reduce latency in data centers that, as you know, support multiple simultaneous workloads.
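To put those 12 memory channels in perspective, here is a back-of-the-envelope calculation of theoretical peak memory bandwidth per socket. The DDR5-6000 module speed is an assumption for illustration, not a figure from the announcement:

```python
# Rough peak memory bandwidth estimate for a 12-channel DDR5 configuration.
# Assumption (not from the article): DDR5-6000 modules, i.e. 6000 MT/s,
# with the standard 64-bit (8-byte) data width per channel.
CHANNELS = 12
TRANSFERS_PER_SEC = 6000e6   # 6000 MT/s (assumed module speed)
BYTES_PER_TRANSFER = 8       # 64-bit channel width

peak_gb_s = CHANNELS * TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9
print(f"Theoretical peak: {peak_gb_s:.0f} GB/s per socket")  # 576 GB/s
```

Real-world throughput will be lower, but the order of magnitude shows why many simultaneous memory-hungry workloads benefit from the extra channels.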
The 5th generation EPYC processors have shown remarkable performance compared to competitors. In AMD's tests, they deliver up to 3.9x more performance in high-performance computing (HPC) applications and up to four times faster video transcoding than Intel Xeon Platinum 8592+ processors. These figures highlight the ability of the new EPYC to manage critical tasks, while providing significant improvements in energy efficiency, making them an option to consider for infrastructure optimization.
In the field of artificial intelligence, AMD has developed specific models, such as the EPYC 9575F, which offers a 28% increase in processing performance compared to alternatives. This capability is key in environments where AI solutions require real-time data processing and high computing capacity. Additionally, these processors are designed to work optimally in workloads that combine CPU and GPU, maximizing performance in advanced AI systems.
Finally, AMD has confirmed support from server manufacturers such as Dell, Lenovo and HPE, which facilitates the adoption of this new series in data centers. Compatibility with existing infrastructure, together with the focus on energy efficiency and performance, positions the 5th generation EPYC processors as a competitive option for organizations seeking to improve the performance of their technology operations while optimizing their operating costs.
Ryzen Pro
Another important announcement today was the new Ryzen AI PRO 300 series of processors, designed to improve the performance of business machines with advanced artificial intelligence (AI) capabilities. Based on the Zen 5 architecture with XDNA 2 support, these processors integrate an NPU to perform AI tasks such as real-time transcription and translation, among other business applications that require intensive processing and, of course, the privacy that client-side AI computing provides.
The Ryzen AI PRO 300 series includes models that reach up to 50 TOPS of AI processing, enabling them to meet the requirements of advanced business software. According to AMD, the new processors offer up to three times the performance in AI tasks compared to the previous generation. In addition, they are designed to provide greater energy efficiency, optimizing battery life without affecting performance in the most demanding applications.
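A quick way to get intuition for what 50 TOPS buys is to estimate the ideal compute time per generated token for a language model. The model size and operations-per-parameter rule of thumb below are illustrative assumptions, not figures from the announcement:

```python
# Rough estimate of per-token compute time on a 50 TOPS NPU.
# Assumptions (not from the article): a 7-billion-parameter model and the
# common rule of thumb of ~2 operations per parameter per generated token.
NPU_TOPS = 50                 # 50 trillion ops/s (article figure)
PARAMS = 7e9                  # assumed model size
ops_per_token = 2 * PARAMS    # rule-of-thumb estimate

seconds_per_token = ops_per_token / (NPU_TOPS * 1e12)
print(f"~{seconds_per_token * 1e3:.2f} ms of compute per token (ideal, compute-bound)")
```

In practice, memory bandwidth and precision effects dominate, but the sketch shows why tens of TOPS make local transcription and translation workloads feasible.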
Among the models presented, the Ryzen AI 9 HX PRO 375 stands out, offering, according to AMD's tests, up to 40% more performance than competing products such as the Intel Core Ultra 7 165U. This new line of processors also includes enhanced security features such as secure boot and cloud recovery, enabling more efficient device management in enterprise environments. Manufacturers such as HP and Lenovo have already announced machines that will use these new processors, expanding the options available to corporate users.
AMD Instinct Accelerators
Aware of the enormous weight that artificial intelligence, and with it specialized hardware, has taken on, AMD today presented its new AMD Instinct MI325X accelerators, designed to improve performance in artificial intelligence applications and high-performance workloads in data centers. Based on the CDNA 3 architecture, these accelerators are optimized for tasks that require intensive parallel processing, such as training AI models.
The AMD Instinct MI325X accelerators feature 256 GB of HBM3E memory and 6.0 TB/s of bandwidth, which allows them to process large volumes of data quickly and efficiently, something that is key in environments that require real-time information management or the training of large-scale models. According to AMD, the new accelerators offer up to 1.8x more capacity and 1.3x more bandwidth than their predecessors.
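The two figures quoted above can be combined into a simple sanity check: how long does one full sweep of the 256 GB memory take at 6.0 TB/s? This matters for memory-bound inference, where generating each token typically requires reading the model weights once:

```python
# Time for one full read of the accelerator's memory at peak bandwidth,
# using only the capacity and bandwidth figures quoted in the article.
CAPACITY_GB = 256       # HBM3E capacity
BANDWIDTH_TB_S = 6.0    # peak memory bandwidth

sweep_ms = CAPACITY_GB / (BANDWIDTH_TB_S * 1000) * 1000
print(f"Full memory sweep: ~{sweep_ms:.1f} ms")  # ~42.7 ms
```

In other words, even a model filling the entire memory could in principle be streamed through more than 20 times per second, which is why capacity and bandwidth are the headline numbers for inference hardware.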
In terms of performance, the MI325X have shown significant improvements in AI tasks, such as inference in models like Mistral 7B and Llama 3.1. These accelerators excel in FP16 and FP8 precision calculations, outperforming other products on the market in some tests. Additionally, AMD has announced that these accelerators will be available in systems from manufacturers such as Dell, Lenovo and Supermicro, facilitating their adoption in data centers.
AMD Pensando
Last but not least, today also brought the presentation of the DPUs (data processing units) of the two AMD Pensando families: Salina and Pollara.
AMD Pensando Salina
The AMD Pensando Salina data processing unit delivers a two-fold increase in performance compared to previous generations. This model is designed to manage the front-end of artificial intelligence networks, optimizing the way data is transferred to AI clusters. With support for transfer rates of up to 400 Gbps, the Salina DPU focuses on improving efficiency and security in networks that handle large volumes of data in real time, which is key for AI applications and large-scale data centers.
The AMD Pensando Salina is also designed to improve the scalability and performance of systems, making it a faster and more secure solution for managing data flows. Additionally, it allows data center operators to optimize their infrastructures by reducing the load on other processors, freeing them for other critical tasks. This new DPU is in testing with customers and is scheduled to be available during the first half of 2025.
AMD Pensando Pollara 400
The AMD Pensando Pollara 400 is the first network card (NIC) compatible with Ultra Ethernet Consortium (UEC) standards, a consortium of manufacturers working on the development of advanced technologies for high-performance networks. This model is designed to improve data transfer in the back-end of artificial intelligence systems, managing communication between accelerators and clusters more efficiently. The Pollara 400 is equipped with the AMD P4 programmable engine, developed to support next-generation networks that require high performance and low latency in AI-intensive environments.
The Pollara 400 network card is aimed at meeting the growing needs of data centers operating with AI-based infrastructures. With support for UEC standards and advanced data management capabilities, this NIC promises to improve both the performance and scalability of operations, ensuring greater efficiency in managing large volumes of data. This model is scheduled to be available in the first half of 2025.