Serving Large Language Models (LLMs) at scale is complex. Modern LLMs now exceed the memory and compute capacity of a single GPU, or even of a single multi-GPU node. As a result, inference workloads for 70B+ and 120B+ parameter models, or for pipelines with large context windows, require distributed, multi-node GPU deployments.
This challenge is sparking innovation across the inference stack, and that is where Dynamo comes in. Dynamo is an open-source framework for distributed inference that manages execution across GPUs and nodes. It breaks inference into phases such as prefill and decode, separating memory-bound from compute-bound work, and dynamically manages GPU resources to raise utilization while keeping latency low.
Dynamo allows infrastructure teams to scale inference capacity responsively, handling demand spikes without permanently overprovisioning expensive GPU resources. The framework is engine-agnostic, supporting TensorRT-LLM, vLLM, SGLang, and others, which gives organizations flexibility in their technology choices.
Microsoft Azure and NVIDIA recently collaborated to showcase the open-source NVIDIA Dynamo framework, demonstrating that disaggregation, smart caching, and dynamic resource allocation can deliver high-performance AI workloads on Kubernetes. The recent report details how the authors deployed Dynamo on an Azure Kubernetes Service (AKS) cluster running on rack-scale ND GB200-v6 VM instances featuring 72 tightly interconnected NVIDIA Blackwell GPUs.
They used this setup to run the open-source GPT-OSS 120B model with a tested ‘InferenceMAX’ recipe, achieving 1.2 million tokens per second and demonstrating that Dynamo can handle enterprise-scale inference workloads on standard Kubernetes clusters.
The deployment relied on standard cloud-native building blocks: GPU node pools, Kubernetes for orchestration, and Helm to install Dynamo. This shows that organizations can benefit from Dynamo without building custom infrastructure.
Dynamo’s core innovation centers on disaggregating the prefill and decode phases of LLM inference onto separate GPUs. The prefill phase, which processes input context, is compute-intensive, while the decode phase, which generates output tokens, is memory-bound. By separating these phases, each can be optimized independently with different GPU counts and parallelism strategies.
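The split can be pictured with a minimal sketch. The `PrefillWorker` and `DecodeWorker` classes below are hypothetical illustrations of the two phases rather than Dynamo's actual API: prefill processes the whole prompt in one compute-heavy pass and emits a KV cache, which decode then consumes token by token, and the two worker pools can be sized independently.

```python
# Illustrative sketch only: a toy prefill/decode split with hypothetical
# PrefillWorker and DecodeWorker classes, not Dynamo's actual API.
from dataclasses import dataclass


@dataclass
class KVCache:
    """Per-request key/value cache handed from prefill to decode."""
    request_id: str
    tokens: list[int]      # prompt tokens already processed
    blocks: list[bytes]    # opaque per-layer KV blocks (stand-in)


class PrefillWorker:
    """Compute-bound: processes the whole prompt once and emits a KV cache."""

    def run(self, request_id: str, prompt_tokens: list[int]) -> KVCache:
        # A real worker would run the model over all prompt tokens in one batch.
        blocks = [bytes(16) for _ in prompt_tokens]   # placeholder KV blocks
        return KVCache(request_id, prompt_tokens, blocks)


class DecodeWorker:
    """Memory-bound: generates output tokens one at a time, reusing the cache."""

    def run(self, cache: KVCache, max_new_tokens: int) -> list[int]:
        output = []
        for step in range(max_new_tokens):
            # Each step re-reads the growing KV cache; memory traffic dominates.
            next_token = (len(cache.tokens) + step) % 50_000   # dummy "sampling"
            cache.blocks.append(bytes(16))
            output.append(next_token)
        return output


# The two pools can use different GPU counts and parallelism strategies.
prefill_pool = [PrefillWorker() for _ in range(2)]   # fewer, compute-heavy workers
decode_pool = [DecodeWorker() for _ in range(6)]     # more, memory-oriented workers

cache = prefill_pool[0].run("req-1", prompt_tokens=list(range(2048)))
tokens = decode_pool[0].run(cache, max_new_tokens=50)
print(f"{len(cache.tokens)}-token prefill, {len(tokens)} tokens decoded")
```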
This architectural choice addresses a common mismatch in inference workloads. Consider an e-commerce application that generates personalized product recommendations: it might process thousands of tokens of user and product context (heavy prefill) but generate only short, 50-token descriptions (light decode). Running both phases on the same GPUs wastes resources. With disaggregated serving, prefill GPUs handle the compute-heavy work, while decode GPUs are provisioned for memory bandwidth and capacity.
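A back-of-envelope calculation makes the imbalance concrete. All model and hardware numbers below are assumptions picked for illustration (a dense FP8 model, round GPU throughput and bandwidth figures), not measurements from the report:

```python
# Rough arithmetic sketch (not a benchmark): heavy prefill stresses compute,
# short decode stresses memory bandwidth. All figures below are assumptions.

PARAMS = 70e9            # assumed dense model size, parameters
BYTES_PER_PARAM = 1      # assumed FP8 weights
PREFILL_TOKENS = 4000    # large user/product context
DECODE_TOKENS = 50       # short generated description

GPU_FLOPS = 1e15         # assumed sustained ~1 PFLOP/s per GPU
GPU_BANDWIDTH = 3e12     # assumed ~3 TB/s memory bandwidth per GPU

# Prefill: one batched pass over the prompt, roughly 2 * params * tokens FLOPs.
prefill_seconds = (2 * PARAMS * PREFILL_TOKENS) / GPU_FLOPS

# Decode: every new token re-reads the weights (plus a growing KV cache),
# so memory traffic of roughly params * bytes_per_param per token dominates.
decode_seconds = (DECODE_TOKENS * PARAMS * BYTES_PER_PARAM) / GPU_BANDWIDTH

print(f"prefill: ~{prefill_seconds * 1e3:.0f} ms, bounded by compute")
print(f"decode:  ~{decode_seconds * 1e3:.0f} ms, bounded by memory bandwidth")
# Colocating both phases on one GPU leaves one of these resources underused at
# any moment; disaggregation sizes each pool for the resource it saturates.
```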
The framework features dynamic GPU scheduling that adapts to changing demand, letting the inference system scale resources with real-time traffic. An SLA-driven Planner component forecasts traffic from time-series data and adjusts the GPU allocation between prefill and decode workers to meet latency targets such as Time to First Token (TTFT) and Inter-Token Latency (ITL).
During traffic surges, the system can reallocate GPUs from decode to prefill operations or spin up additional resources. When the load decreases, resources scale back down. This elasticity helps organizations meet service level objectives without overprovisioning hardware.
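A simplified planner loop illustrates the idea. The `Sla` and `PoolState` types, the `plan` function, and the thresholds below are hypothetical simplifications, not Dynamo's Planner interface; a real planner would act on traffic forecasts rather than single latency samples and would scale whole worker replicas:

```python
# Hypothetical planner loop, not Dynamo's Planner API: shift GPUs between the
# prefill and decode pools based on observed latency versus SLA targets.
from dataclasses import dataclass


@dataclass
class Sla:
    ttft_ms: float   # Time to First Token target
    itl_ms: float    # Inter-Token Latency target


@dataclass
class PoolState:
    prefill_gpus: int
    decode_gpus: int


def plan(state: PoolState, ttft_ms: float, itl_ms: float,
         sla: Sla, max_gpus: int) -> PoolState:
    """Return a new allocation for the next interval."""
    prefill, decode = state.prefill_gpus, state.decode_gpus

    if ttft_ms > sla.ttft_ms:
        # Prompts are queueing: grow prefill from spare capacity, else borrow.
        if prefill + decode < max_gpus:
            prefill += 1
        elif decode > 1:
            prefill, decode = prefill + 1, decode - 1
    elif itl_ms > sla.itl_ms:
        # Token generation is lagging: grow decode the same way.
        if prefill + decode < max_gpus:
            decode += 1
        elif prefill > 1:
            prefill, decode = prefill - 1, decode + 1
    elif (prefill + decode > 2
          and ttft_ms < 0.5 * sla.ttft_ms
          and itl_ms < 0.5 * sla.itl_ms):
        # Comfortably under target: release a GPU so capacity scales back down.
        if prefill >= decode and prefill > 1:
            prefill -= 1
        elif decode > 1:
            decode -= 1

    return PoolState(prefill, decode)


state = PoolState(prefill_gpus=2, decode_gpus=6)
state = plan(state, ttft_ms=900, itl_ms=25,
             sla=Sla(ttft_ms=500, itl_ms=40), max_gpus=8)
print(state)   # PoolState(prefill_gpus=3, decode_gpus=5) during a prompt surge
```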
Dynamo includes an LLM-aware router that tracks the location of key-value (KV) cache blocks across the GPU cluster. When requests arrive, the router calculates the overlap between the new request and cached KV blocks, directing traffic to the GPUs that can maximize cache reuse. This approach reduces redundant computation, which is particularly valuable when multiple requests share common context.
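The routing decision can be approximated with a simple prefix-matching heuristic. The block size, the toy scoring weights, and the `route` function below are illustrative assumptions rather than Dynamo's actual router logic:

```python
# Sketch of a KV-cache-aware routing decision (hypothetical, not Dynamo's
# router): score each worker by how many leading prompt blocks it already
# holds in its KV cache, then weigh that against the worker's current load.

BLOCK_SIZE = 16  # tokens per KV block (assumed)


def to_blocks(tokens: list[int]) -> list[tuple[int, ...]]:
    """Split a token sequence into fixed-size blocks used as cache keys."""
    return [tuple(tokens[i:i + BLOCK_SIZE])
            for i in range(0, len(tokens) - len(tokens) % BLOCK_SIZE, BLOCK_SIZE)]


def prefix_overlap(prompt_blocks: list, cached_blocks: set) -> int:
    """Count leading prompt blocks already cached (only a prefix is reusable)."""
    count = 0
    for block in prompt_blocks:
        if block not in cached_blocks:
            break
        count += 1
    return count


def route(prompt_tokens: list[int], workers: dict[str, dict]) -> str:
    """Pick the worker with the best trade-off of cache reuse vs. load."""
    blocks = to_blocks(prompt_tokens)

    def score(name: str) -> int:
        w = workers[name]
        reused = prefix_overlap(blocks, w["cached_blocks"])
        return reused * BLOCK_SIZE - w["active_requests"] * 10  # toy weighting

    return max(workers, key=score)


shared_context = list(range(64))   # e.g., a system prompt shared by many requests
workers = {
    "gpu-0": {"cached_blocks": set(to_blocks(shared_context)), "active_requests": 3},
    "gpu-1": {"cached_blocks": set(), "active_requests": 1},
}
print(route(shared_context + [7, 7, 7], workers))   # -> gpu-0, despite higher load
```

In practice the router weighs the expected prefill savings against the load already on each worker, which the toy `active_requests` penalty stands in for here.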
For memory management, the KV Block Manager moves rarely accessed cache blocks to CPU RAM, SSDs, or object storage. This tiered approach allows cache storage to scale to petabytes while keeping reuse efficient. Without offloading, packing more concurrent sessions onto a GPU leads to evictions and expensive recomputation; with offloading, GPUs can serve more users while keeping latency low.
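Conceptually, this kind of tiered offloading behaves like an LRU hierarchy spanning GPU memory, CPU RAM, and disk. The `TieredKVStore` below is a simplified sketch of that idea with assumed capacities, not the KV Block Manager's actual implementation:

```python
# Minimal sketch of tiered KV-block offloading (assumed design, not the actual
# KV Block Manager): keep hot blocks on the GPU, spill cold ones to CPU RAM,
# then to disk, instead of evicting and recomputing them.
from collections import OrderedDict


class TieredKVStore:
    def __init__(self, gpu_capacity: int, cpu_capacity: int):
        self.gpu = OrderedDict()   # block_id -> data, kept in LRU order
        self.cpu = OrderedDict()
        self.disk = {}             # stand-in for SSD / object storage
        self.gpu_capacity = gpu_capacity
        self.cpu_capacity = cpu_capacity

    def put(self, block_id: str, data: bytes) -> None:
        self.gpu[block_id] = data
        self.gpu.move_to_end(block_id)
        self._spill()

    def get(self, block_id: str) -> bytes:
        """Fetch a block, promoting it back to the GPU tier on reuse."""
        for tier in (self.gpu, self.cpu, self.disk):
            if block_id in tier:
                data = tier.pop(block_id)
                self.put(block_id, data)   # promotion is cheaper than recompute
                return data
        raise KeyError(f"{block_id} was never cached and must be recomputed")

    def _spill(self) -> None:
        # Demote least-recently-used blocks down the hierarchy under pressure.
        while len(self.gpu) > self.gpu_capacity:
            block_id, data = self.gpu.popitem(last=False)
            self.cpu[block_id] = data
        while len(self.cpu) > self.cpu_capacity:
            block_id, data = self.cpu.popitem(last=False)
            self.disk[block_id] = data


store = TieredKVStore(gpu_capacity=2, cpu_capacity=2)
for i in range(5):
    store.put(f"block-{i}", bytes(16))
print(sorted(store.gpu), sorted(store.cpu), sorted(store.disk))
# ['block-3', 'block-4'] ['block-1', 'block-2'] ['block-0']
```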
Dynamo is positioned as the successor to NVIDIA Triton Inference Server, incorporating lessons learned from earlier inference serving frameworks. Built with Rust for performance and Python for extensibility, the project is fully open source on GitHub.
