Table of Links
Abstract and 1 Introduction
2 Background
2.1 Large Language Models
2.2 Fragmentation and PagedAttention
3 Issues with the PagedAttention Model and 3.1 Requires re-writing the attention kernel
3.2 Adds redundancy in the serving framework and 3.3 Performance Overhead
4 Insights into LLM Serving Systems
5 vAttention: System Design and 5.1 Design Overview
5.2 Leveraging Low-level CUDA Support
5.3 Serving LLMs with vAttention
6 vAttention: Optimizations and 6.1 Mitigating internal fragmentation
6.2 Hiding memory allocation latency
7 Evaluation
7.1 Portability and Performance for Prefills
7.2 Portability and Performance for Decodes
7.3 Efficacy of Physical Memory Allocation
7.4 Analysis of Memory Fragmentation
8 Related Work
9 Conclusion and References
In a recent work, GMLake [35] showed that CUDA virtual memory support can mitigate fragmentation in DNN training jobs, increasing the training batch size. In particular, GMLake uses CUDA support to coalesce multiple smaller physical memory pages into a single virtually contiguous object, which prevents out-of-memory errors for large object allocations. In contrast, vAttention focuses on avoiding fragmentation for LLM inference. Unlike training, LLM inference is latency-sensitive and requires allocations at a smaller granularity. We proposed various LLM-inference-specific optimizations to meet these requirements.
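For concreteness, the sketch below illustrates the kind of coalescing that such an approach relies on, using the standard CUDA virtual memory APIs (cuMemAddressReserve, cuMemCreate, cuMemMap, cuMemSetAccess): two independently created physical chunks are stitched into one virtually contiguous buffer. This is an illustrative sketch under simplified assumptions, not GMLake's actual code.

```cpp
// Minimal sketch: stitch two separate physical allocations into one
// virtually contiguous buffer with the CUDA virtual memory APIs.
#include <cuda.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define CHECK(call) do { CUresult r = (call); if (r != CUDA_SUCCESS) { \
    fprintf(stderr, "CUDA error %d at %s:%d\n", r, __FILE__, __LINE__); exit(1); } } while (0)

int main() {
    CHECK(cuInit(0));
    CUdevice dev; CHECK(cuDeviceGet(&dev, 0));
    CUcontext ctx; CHECK(cuCtxCreate(&ctx, 0, dev));

    // Describe physical allocations on device 0.
    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = 0;

    size_t gran = 0;
    CHECK(cuMemGetAllocationGranularity(&gran, &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM));

    // Two physical chunks (e.g., smaller freed fragments) that will back one large object.
    const int kChunks = 2;
    std::vector<CUmemGenericAllocationHandle> handles(kChunks);
    for (int i = 0; i < kChunks; i++)
        CHECK(cuMemCreate(&handles[i], gran, &prop, 0));

    // Reserve a contiguous virtual address range large enough for both chunks.
    CUdeviceptr va;
    CHECK(cuMemAddressReserve(&va, kChunks * gran, 0, 0, 0));

    // Map each physical chunk at consecutive offsets and enable read/write access.
    CUmemAccessDesc access = {};
    access.location = prop.location;
    access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    for (int i = 0; i < kChunks; i++) {
        CHECK(cuMemMap(va + i * gran, gran, 0, handles[i], 0));
        CHECK(cuMemSetAccess(va + i * gran, gran, &access, 1));
    }

    // va now behaves like a single contiguous allocation of kChunks * gran bytes.
    printf("coalesced %d x %zu bytes at %p\n", kChunks, gran, (void*)va);

    // Teardown: unmap, release physical memory, free the virtual range.
    for (int i = 0; i < kChunks; i++) {
        CHECK(cuMemUnmap(va + i * gran, gran));
        CHECK(cuMemRelease(handles[i]));
    }
    CHECK(cuMemAddressFree(va, kChunks * gran));
    CHECK(cuCtxDestroy(ctx));
    return 0;
}
```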
Optimizing LLM inference is an active area of research. Various scheduling systems have been proposed to improve different aspects of LLM serving. For example, Orca [47] and vLLM [39] aim to improve serving throughput with efficient batching. Sarathi [26] and SplitFuse [36] split a long prefill into multiple smaller chunks and combine decode tokens with each chunk to improve GPU compute utilization. Building on similar techniques, Sarathi-Serve [25] proposes stall-free batching to minimize the impact of long-running prefill iterations on decode latency. Splitwise [41], DistServe [49] and TetriInfer [38] disaggregate the prefill and decode phases, executing them on different replicas to avoid interference between prefill and decode requests. For offline inference on resource-constrained devices, FlexGen [43] proposes a scheduling and offloading strategy to improve throughput. FastServe [45] minimizes job completion times in LLM inference using preemptive scheduling.
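As a rough illustration of chunked prefills, the sketch below forms one iteration's batch under a fixed token budget: each ongoing decode contributes a single token and the remaining budget goes to incomplete prefills. The names (Request, form_batch, chunk_size) are hypothetical, and the policy is simplified relative to the actual schedulers cited above.

```cpp
// Hypothetical sketch of chunked-prefill batch formation under a token budget.
#include <algorithm>
#include <vector>

struct Request {
    int id;
    int prompt_len;      // total prefill (prompt) tokens
    int prefilled = 0;   // prefill tokens processed so far
    bool decoding() const { return prefilled >= prompt_len; }
};

struct BatchItem { int request_id; int num_tokens; };

// Form one iteration's batch: decodes contribute one token each, and the
// remaining budget is filled with chunks of incomplete prefills.
std::vector<BatchItem> form_batch(std::vector<Request> &reqs, int chunk_size) {
    std::vector<BatchItem> batch;
    int budget = chunk_size;
    for (auto &r : reqs)
        if (r.decoding() && budget > 0) { batch.push_back({r.id, 1}); --budget; }
    for (auto &r : reqs) {
        if (!r.decoding() && budget > 0) {
            int take = std::min(budget, r.prompt_len - r.prefilled);
            batch.push_back({r.id, take});
            r.prefilled += take;
            budget -= take;
        }
    }
    return batch;
}
```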
For all the above systems to work effectively, efficient use of GPU physical memory is essential. Since its introduction in vLLM, PagedAttention has been adopted in various serving frameworks, e.g., TensorRT-LLM [14] and LightLLM [12], as well as in kernel implementations, e.g., FlashAttention [9] and FlashInfer [11]. In contrast, vAttention offers an alternative approach for dynamic KV-cache memory management. We show that system support for demand paging can easily add dynamic memory management to existing kernel implementations.
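The sketch below illustrates the general idea behind demand paging for the KV-cache (not vAttention's actual implementation): virtual address space for the maximum context length is reserved once, and physical pages are attached lazily as the sequence grows, so an unmodified attention kernel can keep using a plain contiguous pointer. The KVBuffer, kv_reserve and kv_grow names are illustrative, and error handling is omitted for brevity.

```cpp
// Minimal sketch of a demand-paged, virtually contiguous KV-cache buffer.
#include <cuda.h>
#include <vector>

struct KVBuffer {
    CUdeviceptr base = 0;   // virtually contiguous, sized for the max context length
    size_t reserved = 0;    // bytes of reserved virtual address space
    size_t mapped = 0;      // bytes currently backed by physical memory
    size_t page = 0;        // physical allocation granularity
    CUmemAllocationProp prop{};
    CUmemAccessDesc access{};
    std::vector<CUmemGenericAllocationHandle> handles;  // retained for later unmap/reuse
};

// Reserve virtual address space only; no physical memory is committed yet.
void kv_reserve(KVBuffer &kv, size_t max_bytes, int device) {
    kv.prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    kv.prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    kv.prop.location.id = device;
    cuMemGetAllocationGranularity(&kv.page, &kv.prop,
                                  CU_MEM_ALLOC_GRANULARITY_MINIMUM);
    kv.reserved = ((max_bytes + kv.page - 1) / kv.page) * kv.page;
    cuMemAddressReserve(&kv.base, kv.reserved, 0, 0, 0);
    kv.access.location = kv.prop.location;
    kv.access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
}

// Attach physical pages on demand so that at least `needed_bytes` are usable.
// The attention kernel keeps reading/writing through kv.base as a plain pointer.
void kv_grow(KVBuffer &kv, size_t needed_bytes) {
    while (kv.mapped < needed_bytes && kv.mapped < kv.reserved) {
        CUmemGenericAllocationHandle h;
        cuMemCreate(&h, kv.page, &kv.prop, 0);
        cuMemMap(kv.base + kv.mapped, kv.page, 0, h, 0);
        cuMemSetAccess(kv.base + kv.mapped, kv.page, &kv.access, 1);
        kv.handles.push_back(h);
        kv.mapped += kv.page;
    }
}
```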
Authors:
(1) Ramya Prabhu, Microsoft Research India;
(2) Ajay Nayak, Indian Institute of Science (contributed to this work as an intern at Microsoft Research India);
(3) Jayashree Mohan, Microsoft Research India;
(4) Ramachandran Ramjee, Microsoft Research India;
(5) Ashish Panwar, Microsoft Research India.