NVIDIA has announced the global availability of the RTX PRO 5000 with 72 GB. The increased memory capacity over the original version is the most notable feature of a model that expands the company's professional visualization lineup and gives customers greater flexibility to adapt their systems to a wider range of budgets and project requirements.
This new GPU “gives AI developers, data scientists, and creative professionals the hardware for modern, memory-intensive workflows, and comes at a time when demand for Blackwell-class computing is greater than ever,” NVIDIA explains in the announcement.
RTX PRO 5000 with 72 GB
NVIDIA launched the original RTX PRO 5000 GPU with 48 GB of GDDR7 memory, distributed across 24 modules of 2 GB each, installed on both sides of the PCB. The new RTX PRO 5000 with 72 GB switches to 3 GB memory modules to increase total capacity.
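The arithmetic behind the upgrade is simple: the module count stays the same and only the per-chip density changes. A minimal sketch, using the figures quoted above:

```python
# Capacity arithmetic for the memory upgrade described above.
# The module count (24) and per-module sizes come from the article.

MODULES = 24  # GDDR7 packages mounted on both sides of the PCB

original_gb = MODULES * 2  # 2 GB per module -> 48 GB
updated_gb = MODULES * 3   # 3 GB per module -> 72 GB

print(f"Original RTX PRO 5000: {original_gb} GB")  # 48 GB
print(f"Updated RTX PRO 5000:  {updated_gb} GB")   # 72 GB
```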
NVIDIA has a good number of models in its catalog to cover the entire professional visualization segment, from the entry level and mid-range (RTX PRO 4000 and 2000) to the high end represented by the RTX PRO 6000. But bearing in mind that memory, in both capacity and bandwidth, is key in graphics cards, the new offering will be attractive to professionals in data science, AI, HPC and other areas such as professional video editing.
Beyond memory, the new version is similar to the original. Based on the ‘Blackwell’ architecture (the green giant's latest generation), the card uses the GB202 graphics core and has 14,080 CUDA cores, 440 TMUs and 176 ROPs. Additionally, the NVENC/NVDEC blocks have been enhanced to accelerate high-quality encoding and decoding for live production and fast video editing, while Multi-Instance GPU (MIG) gives IT and cloud/VDI administrators a simple way to split the GPU into isolated instances, so more users get guaranteed, accelerated performance.
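For administrators curious what MIG provisioning looks like in practice, the sketch below wraps the standard nvidia-smi MIG workflow in Python. The GPU index and the profile IDs are illustrative assumptions; the profiles actually available depend on the specific GPU and driver.

```python
# Minimal sketch of MIG provisioning via nvidia-smi, driven from Python.
# Assumptions: GPU index 0, and placeholder profile IDs -- run
# `nvidia-smi mig -lgip` to see the real profiles supported by your card.
import subprocess

def run(cmd: str) -> str:
    """Run a shell command and return its output (raises on failure)."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

# 1. Enable MIG mode on GPU 0 (requires admin rights, may need a GPU reset).
run("nvidia-smi -i 0 -mig 1")

# 2. List the GPU instance profiles this card supports.
print(run("nvidia-smi mig -lgip"))

# 3. Create two GPU instances from a chosen profile ID and back each one
#    with a default compute instance (-C). The IDs here are placeholders.
run("nvidia-smi mig -cgi 9,9 -C")

# 4. Verify the resulting isolated instances.
print(run("nvidia-smi mig -lgi"))
```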
Driving the next generation of AI development
AI is a central focus of the new GPU. As generative AI evolves towards complex, multimodal agentic AI, the hardware demands of developing and deploying these technologies keep growing.
A defining challenge of AI development is memory capacity. Running cutting-edge AI workflows, especially those involving large language models (LLMs) and AI agents, places significant pressure on GPU memory, particularly as models, context windows, and multimodal pipelines grow in size and complexity.
Agentic AI systems involve toolchains, retrieval augmented generation (RAG), and multimodal understanding. These systems often need to keep multiple AI models, data sources, and code formats active simultaneously within GPU memory.
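To make that memory pressure concrete, here is a rough back-of-the-envelope estimator for a single local LLM: weights plus KV cache. All of the model dimensions below are illustrative assumptions, not figures from NVIDIA.

```python
# Rough GPU-memory estimate for running an LLM locally: weights + KV cache.
# Every model dimension below is an illustrative assumption.

def weights_gb(params_billions: float, bytes_per_param: float = 2) -> float:
    """Memory for model weights (FP16/BF16 = 2 bytes; 4-bit quantized ~ 0.5)."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_len: int, batch: int = 1,
                bytes_per_val: int = 2) -> float:
    """KV cache: 2 tensors (K and V) per layer, per token, per KV head."""
    return (2 * layers * kv_heads * head_dim * context_len * batch
            * bytes_per_val) / 1024**3

# Example: a hypothetical 30B-parameter model in FP16 with a 64K-token
# context window (48 layers, 8 grouped-query KV heads, head_dim 128).
w = weights_gb(30)                                   # ~56 GB of weights
kv = kv_cache_gb(48, 8, 128, context_len=64_000)     # ~12 GB of KV cache
print(f"weights ~{w:.0f} GB, KV cache ~{kv:.0f} GB, total ~{w + kv:.0f} GB")
# Total ~68 GB: beyond a 48 GB card, but within the 72 GB of the new model.
```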
For local AI development, raw compute is only half the battle: memory capacity determines what users can run, and performance determines how fast it runs. In industry-standard benchmarks for generative AI, the RTX PRO 5000 72 GB delivers 3.5x the performance of previous-generation NVIDIA hardware for image generation and twice the performance for text generation.
In creative workflows, time saved rendering is time gained for iteration. In rendering engines like Arnold, Chaos V-Ray, and Blender, as well as real-time GPU renderers like D5 Render and Redshift, the RTX PRO 5000 72 GB renders up to 4.7x faster. And for computer-aided engineering and product design, it delivers more than double the graphics performance, according to NVIDIA data.
