A new feature of Intel Xeon 6 “Birch Stream” platforms is the “Latency Optimized Mode” performance setting. The Intel Latency Optimized Mode keeps uncore clock frequencies higher for more consistent performance, but at the cost of increased power use. For those wondering about the performance and power impact, here are some comparison benchmarks of engaging this Latency Optimized Mode with Intel Xeon 6980P “Granite Rapids” server processors.
When Granite Rapids launched last year I had wanted to dive into this little-advertised BIOS option, but alas my Intel AvenueCity reference server had ultimately failed before I got to that testing. Thankfully Giga Computing sent over the Gigabyte R284-A92-AAL 2U server platform for dual Xeon 6900 series testing that I have been running the past few months for fresh Xeon 6980P Linux benchmarking at Phoronix.
After recently revisiting the MRDIMM vs. DDR5 performance for Xeon 6, the AMX performance benefits for AI, SNC3 vs. HEX mode performance, and Linux kernel optimizations, it was finally time to check out the Latency Optimized Mode on Xeon 6.
Per Intel’s documentation, the Latency Optimized Mode new to Birch Stream helps ensure better latency by consistently maintaining higher uncore frequencies unless limited by RAPL/TDP power limits or other platform constraints. By default the Latency Optimized Mode is disabled due to its impact on power consumption.
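For those wanting to observe the uncore behavior from Linux, the kernel's intel_uncore_frequency driver exposes per-package uncore frequency limits under sysfs. Below is a minimal Python sketch for dumping whatever frequency attributes are present; the exact directory layout and attribute names (e.g. current_freq_khz) vary by kernel version and platform, so treat it as an illustration rather than the exact method used for this article.

```python
#!/usr/bin/env python3
# Minimal sketch: dump uncore frequency attributes exposed by the
# intel_uncore_frequency driver. Attribute availability (such as
# current_freq_khz) depends on kernel version and platform.
from pathlib import Path

BASE = Path("/sys/devices/system/cpu/intel_uncore_frequency")

def dump_uncore_freqs():
    if not BASE.is_dir():
        print("intel_uncore_frequency driver not loaded")
        return
    for domain in sorted(p for p in BASE.iterdir() if p.is_dir()):
        values = {}
        for attr in ("min_freq_khz", "max_freq_khz", "current_freq_khz"):
            f = domain / attr
            if f.exists():
                values[attr] = int(f.read_text().strip())
        print(domain.name, values)

if __name__ == "__main__":
    dump_uncore_freqs()
```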
This round of testing used two Intel Xeon 6980P Granite Rapids processors in the Giga Computing R284-A92-AAL1 server running Ubuntu 25.10 with a Linux 6.18 kernel development version. Tests were first run with the BIOS defaults and then repeated after enabling the Latency Optimized Mode in the BIOS. In addition to the raw performance, overall system power consumption was monitored via the BMC to see the impact on power use.
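As a rough illustration of the power-monitoring side, system power can be polled out-of-band from the BMC using ipmitool's DCMI power reading. The short Python sketch below wraps that call; it is only an assumed stand-in for the BMC polling used during the benchmarks, and it assumes ipmitool is installed and the BMC supports DCMI power readings.

```python
#!/usr/bin/env python3
# Sketch: poll system power from the BMC via "ipmitool dcmi power reading".
# Assumes ipmitool is installed and the BMC supports DCMI power readings.
import re
import subprocess
import time

def read_bmc_power_watts() -> int | None:
    out = subprocess.run(["ipmitool", "dcmi", "power", "reading"],
                         capture_output=True, text=True, check=True).stdout
    # The reading appears as e.g. "Instantaneous power reading:   512 Watts"
    m = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    return int(m.group(1)) if m else None

if __name__ == "__main__":
    # Print a few one-second samples of the instantaneous system power.
    for _ in range(5):
        print(read_bmc_power_watts(), "Watts")
        time.sleep(1)
```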
