With memory pricing being as wild as it is these days and with MRDIMMs on Xeon 6 Granite Rapids offering much more memory bandwidth than conventional DDR5 RDIMMs, you may be wondering about the performance impact of not populating all twelve memory channels on the Xeon 6900 series processors. This article presents benchmarks demonstrating the performance difference of MRDIMM-8800 memory when using six, eight, ten, and twelve MRDIMMs with a Xeon 6980P server.
Last year I provided benchmarks showing the DDR5-6400 vs. MRDIMM-8800 memory performance on the Xeon 6980P Granite Rapids server processor. With MRDIMMs providing significantly higher memory bandwidth and a lower effective latency, the question becomes the viability of going with fewer than twelve MRDIMMs, especially with today's memory pricing.
The MRDIMMs I have been using for all of my testing, originally supplied by Intel as review samples with the Xeon 6980P, are the Micron MTC40F2046S1HC88XDY / PC5-8800X-HA0-1110-XT MRDIMMs. Last year these MRDIMMs were around $450 USD per DIMM, while checking various Internet retailers this week shows the 64GB MTC40F2046S1HC88XD1 at $987 to $1650 USD per DIMM. So loading out all twelve memory channels can cost as much as ~$19,800 USD, more than the price of a Xeon 6980P itself at currently around $6~7k USD. And those twelve memory modules would be just for a single socket setup.
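For quick reference, here is a minimal sketch in Python (using the per-DIMM retail price range quoted above, which will obviously vary by retailer and over time) of how the per-socket memory cost scales across the DIMM populations tested in this article:

```python
# Per-socket MRDIMM cost at each channel population tested,
# using this week's observed retail range for the 64GB
# MTC40F2046S1HC88XD1 (prices in USD, subject to change).
LOW, HIGH = 987, 1650  # USD per DIMM

for dimms in (6, 8, 10, 12):
    print(f"{dimms:2d} MRDIMMs ({dimms * 64:4d} GB): "
          f"${dimms * LOW:,} to ${dimms * HIGH:,}")
```

At the top end that works out to 12 x $1650 = $19,800 USD for a fully populated single socket.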
So mostly for reference purposes, I ran a wide variety of benchmarks on a single Xeon 6980P with 6 / 8 / 10 / 12 MRDIMM-8800 populations to look at the impact on performance. The overall server power consumption was measured too for reference. All of this testing was done on the Gigabyte R284-A92-AAL1 server kindly supplied by Giga Computing.
This server was run in the default SNC3 clustering mode. For more details on that, see the prior SNC3 vs. HEX benchmarks on Granite Rapids.
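For readers wanting to confirm the clustering mode on their own system, a minimal sketch follows, assuming a Linux environment: under SNC3 each Xeon 6900 series socket is exposed as three NUMA nodes, while under HEX mode the socket appears as a single node. The node count can be read from sysfs (equivalent to inspecting `numactl --hardware` output):

```python
# Count the NUMA nodes Linux exposes via sysfs. On a single-socket
# Xeon 6900 series system, three nodes indicates SNC3 clustering
# and one node indicates HEX mode.
import os
import re

nodes = sorted(d for d in os.listdir("/sys/devices/system/node")
               if re.fullmatch(r"node\d+", d))
print(f"{len(nodes)} NUMA node(s): {nodes}")
```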
