PyTorch 2.6 is out today as the newest feature release to this widely-used machine learning library.
With PyTorch 2.6 there is now FP16 support for x86 CPUs in both the eager and Inductor modes. With Intel Xeon 6 P-core "Granite Rapids" processors this means taking advantage of Float16 with the Advanced Matrix Extensions (AMX). In the prior PyTorch 2.5 release the Float16 CPU support was considered prototype-level, but it is now considered beta-level, with better performance and its functionality verified across a range of workloads.
Meanwhile appearing in PyTorch 2.6 in prototype form is an improved user experience with Intel graphics. Both Intel discrete and integrated graphics see better support with PyTorch 2.6, especially on Microsoft Windows. There is an easier software setup experience, improved Windows binaries, and enhanced coverage of ATen operators on Intel GPUs with SYCL kernels.
PyTorch 2.6 also brings several improvements to PT2 (the torch.compile stack), FlexAttention support on x86 CPUs for large language models (LLMs), and various other enhancements.
Downloads and more details on today’s PyTorch 2.6 release via GitHub.