Intel Xeon 6 server processors will serve as the host CPU in NVIDIA DGX Rubin NVL8 systems. The collaboration between the two giants of CPUs and GPUs, respectively, was announced as one of the headline reveals of NVIDIA's GTC 2026 conference.
In this way, Intel underlines its role in the orchestration, scalability and security of modern AI infrastructure, providing architectural continuity and scalability for GPU-accelerated AI systems as workloads evolve toward massive real-time inference.
«AI is shifting from large-scale training to real-time inference everywhere, driven by AI systems and agent-based reasoning», says Jeff McVeigh, corporate vice president and general manager of Strategic Data Center Programs at Intel. «In this new era, the host CPU is essential. It controls orchestration, memory access, model security, and performance on GPU-accelerated systems. Intel Xeon 6 delivers leading performance, efficiency, and compatibility with the extensive x86 software ecosystem customers rely on to scale inference workloads».
Intel Xeon 6 in NVIDIA DGX Rubin NVL8
As organizations continue to deploy AI systems, inference is increasingly defined not only by GPU performance but also by CPU-led system performance: the host processor determines overall cluster efficiency and total cost of ownership. It also handles critical functions such as memory management, task orchestration, and workload distribution, ensuring the security, reliability, and operational continuity essential to modern AI infrastructure.
Based on these system requirements, Intel's PCIe and I/O capabilities reinforce Xeon's role as a high-bandwidth, low-latency platform for diverse workloads, with features such as:
- Efficient performance per watt.
- Optimized support across the AI software ecosystem, including new support for NVIDIA Dynamo that enables heterogeneous inference on CPUs and future GPUs.
- Demonstrated reliability in mission-critical environments.
- Superior orchestration of GPU-accelerated heterogeneous systems.
This selection, Intel explains, “reinforces Xeon’s position as a fundamental pillar of modern AI infrastructure, enabling scalable deployment across modern data center, cloud and edge use cases”. As AI inference expands, end-to-end confidential computing becomes essential, from the CPU data paths to the GPU. Intel Trust Domain Extensions (TDX) adds hardware-based isolation and attestation, further reinforcing Xeon as the secure foundation for modern AI clusters.
NVIDIA DGX Rubin NVL8 systems integrate Intel Xeon 6 processors based on the architecture established with the Intel Xeon 6776P in current NVIDIA Blackwell-based platforms, including DGX B300 systems. By leveraging this proven foundation, Intel brings system-level performance, experience and know-how to the new DGX Rubin NVL8 systems.
Intel designed Xeon so that these systems get the most out of their GPUs, using features like Priority Core Turbo to keep data flowing to the GPUs. Additionally, with its strong single-threaded performance for the tasks that handle orchestration, scheduling, and data movement, Xeon ensures smooth and efficient operation even as inference workloads become more complex.
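The CPU-side pattern described here, where the host processor keeps data flowing so the GPUs never sit idle, can be illustrated with a minimal sketch. This is a generic producer/consumer pipeline using only the Python standard library; the names `prepare_batch` and `gpu_infer` are hypothetical stand-ins for host-side preprocessing and a GPU kernel launch, not Xeon, DGX, or CUDA APIs.

```python
import queue
import threading

def prepare_batch(i):
    # CPU-side work: tokenization, batching, host-to-device staging
    # (simulated here as building a small list).
    return [i] * 4

def gpu_infer(batch):
    # Stand-in for an accelerator call; here it just sums the batch.
    return sum(batch)

def run_pipeline(num_batches=8, depth=2):
    # A bounded queue applies backpressure: the CPU feeder stays a few
    # batches ahead of the consumer without running away from it.
    q = queue.Queue(maxsize=depth)
    results = []

    def feeder():
        for i in range(num_batches):
            q.put(prepare_batch(i))
        q.put(None)  # sentinel: no more work

    t = threading.Thread(target=feeder)
    t.start()
    while True:
        batch = q.get()
        if batch is None:
            break
        results.append(gpu_infer(batch))
    t.join()
    return results

print(run_pipeline())
```

The design point the sketch makes is the one in the text: the feeder thread (the CPU's role) must stay ahead of the consumer (the GPU's role), so single-threaded CPU speed on the preparation path directly bounds accelerator utilization.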
