Table of Links
Abstract and 1 Introduction
2 Background
2.1 Large Language Models
2.2 Fragmentation and PagedAttention
3 Issues with the PagedAttention Model and 3.1 Requires re-writing the attention kernel
3.2 Adds redundancy in the serving framework and 3.3 Performance Overhead
4 Insights into LLM Serving Systems
5 vAttention: System Design and 5.1 Design Overview
5.2 Leveraging Low-level CUDA Support
5.3 Serving LLMs with vAttention
6 vAttention: Optimizations and 6.1 Mitigating internal fragmentation
6.2 Hiding memory allocation latency
7 Evaluation
7.1 Portability and Performance for Prefills
7.2 Portability and Performance for Decodes
7.3 Efficacy of Physical Memory Allocation
7.4 Analysis of Memory Fragmentation
8 Related Work
9 Conclusion and References
3 Issues with the PagedAttention Model
Despite being inspired by demand paging, PagedAttention adopts an approach that differs from conventional demand paging: it requires an application’s code to be modified to adapt to dynamically allocated physical memory, whereas conventional demand paging is transparent to applications. This section elaborates on some of the issues that arise from such an approach.
3.1 Requires re-writing the attention kernel
PagedAttention necessitates re-writing the attention kernel. This is because conventional implementations of the attention operator assume that the two input tensors K and V (Equation 2) are stored in contiguous memory. By departing from the conventional memory layout, PagedAttention requires an implementation of the attention operator to be modified so as to compute attention scores over non-contiguous KV-cache blocks. Writing correct and performant GPU kernels can be challenging for most programmers [15].
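To make the layout difference concrete, here is a minimal NumPy sketch (not vLLM's actual CUDA kernel; the shapes, block_size, and block_table layout are illustrative assumptions) contrasting single-query attention over a contiguous KV-cache with the same computation over non-contiguous blocks that must be resolved through a block table.

```python
# Minimal sketch only (NumPy), not vLLM's CUDA kernel: the shapes, block_size,
# and block_table layout below are illustrative assumptions.
import numpy as np

def attention_contiguous(q, K, V):
    """Single-query attention over a KV-cache stored contiguously.
    q: (d,); K, V: (num_tokens, d) in one contiguous allocation."""
    scores = K @ q / np.sqrt(q.shape[0])            # (num_tokens,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                              # (d,)

def attention_paged(q, K_blocks, V_blocks, block_table, num_tokens, block_size):
    """Same computation when the KV-cache lives in non-contiguous blocks.
    K_blocks, V_blocks: (num_physical_blocks, block_size, d);
    block_table maps a logical block index to a physical block index."""
    d = q.shape[0]
    scores = np.empty(num_tokens)
    for t in range(num_tokens):                     # extra indirection per token
        phys, off = block_table[t // block_size], t % block_size
        scores[t] = K_blocks[phys, off] @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    out = np.zeros(d)
    for t in range(num_tokens):
        phys, off = block_table[t // block_size], t % block_size
        out += weights[t] * V_blocks[phys, off]
    return out

# Both paths compute identical outputs; only the memory layout (and hence the
# kernel's indexing logic) differs.
rng = np.random.default_rng(0)
d, block_size, num_tokens = 8, 4, 10
q = rng.standard_normal(d)
K = rng.standard_normal((num_tokens, d))
V = rng.standard_normal((num_tokens, d))
num_blocks = -(-num_tokens // block_size)           # ceil division
block_table = rng.permutation(num_blocks)           # scattered physical blocks
K_blocks = np.zeros((num_blocks, block_size, d))
V_blocks = np.zeros((num_blocks, block_size, d))
for t in range(num_tokens):
    phys, off = block_table[t // block_size], t % block_size
    K_blocks[phys, off], V_blocks[phys, off] = K[t], V[t]
assert np.allclose(attention_contiguous(q, K, V),
                   attention_paged(q, K_blocks, V_blocks, block_table,
                                   num_tokens, block_size))
```

The per-token block-table lookup in the paged variant is the indexing logic that every PagedAttention-compatible kernel must add, and doing this efficiently on a GPU is where the implementation effort lies.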
Being a fundamental building block of the transformer architecture, the attention operator has witnessed a tremendous pace of performance-oriented innovation in the systems and ML communities [10, 27, 29–31, 33, 34, 37, 42, 46, 48], and this trend is likely to continue. In the PagedAttention model, keeping up with new research requires continued effort to port new optimizations to a PagedAttention-aware implementation. Production systems can therefore easily fall behind research, potentially losing performance and competitive advantage. To provide an example, Figure 9b shows that the paged kernel of vLLM is already up to 2.85× slower than the FlashAttention counterpart for grouped-query attention [27].
Authors:
(1) Ramya Prabhu, Microsoft Research India;
(2) Ajay Nayak, Indian Institute of Science (contributed to this work as an intern at Microsoft Research India);
(3) Jayashree Mohan, Microsoft Research India;
(4) Ramachandran Ramjee, Microsoft Research India;
(5) Ashish Panwar, Microsoft Research India.