Following last year's AMD ZenDNN 5.0 release alongside the EPYC Turin launch, which brought big performance improvements for CPU-based inferencing with this open-source library compatible with Intel's oneDNN, AMD today released ZenDNN 5.1 as the next update.
ZenDNN 5.1 is the latest release of this AMD Zen-optimized Deep Neural Network Library for CPU-based AI inferencing, and is optimized for use with TensorFlow 2.19 and PyTorch 2.7.
ZenDNN 5.1 focuses on better optimizing large-scale recommender system models such as DLRMv2 and DIEN. Concat optimizations deliver around a 28% performance gain for the DIEN BF16 model.
ZenDNN 5.1 also introduces new operator fusions for MatMul + BiasAdd + Tanh and for MatMul + BiasAdd + Sigmoid. The new release additionally brings a new kernel for BF16/FP32 matrix multiplication that can improve performance with the DIEN model, and adds Ahead-Of-Time (AOT) reordering for MatMul kernels across the INT8, BF16, and FP32 data types.
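To illustrate what a MatMul + BiasAdd + Tanh fusion computes, here is a minimal NumPy sketch. The function names are hypothetical and purely illustrative; ZenDNN performs this fusion inside its own optimized kernels rather than through any API like this. The point is that the fused form produces the same result as the three separate operators while avoiding intermediate memory round-trips.

```python
import numpy as np

def unfused(x, w, b):
    # Three separate operators, each materializing an intermediate tensor
    y = x @ w          # MatMul
    y = y + b          # BiasAdd
    return np.tanh(y)  # Tanh

def fused_matmul_bias_tanh(x, w, b):
    # Hypothetical fused version: one pass, same mathematical result.
    # A real fused kernel would compute this without writing the
    # MatMul and BiasAdd intermediates back to memory.
    return np.tanh(x @ w + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8), dtype=np.float32)
w = rng.standard_normal((8, 3), dtype=np.float32)
b = rng.standard_normal(3, dtype=np.float32)

# The fused and unfused paths must agree numerically
assert np.allclose(unfused(x, w, b), fused_matmul_bias_tanh(x, w, b))
```

The MatMul + BiasAdd + Sigmoid fusion follows the same pattern with `1 / (1 + np.exp(-y))` in place of `np.tanh(y)`.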
Downloads and more details on today's AMD ZenDNN 5.1 feature release are available via GitHub.