Large Language Models (LLMs) have excelled in various fields but face efficiency limitations due to the substantial Key-Value (KV) cache required for long-sequence inference. Recent efforts attempt to evict non-critical cache elements at runtime, reducing cache size within a given memory budget while preserving generation quality. Our reexamination of the foundational principles reveals that prevailing methods aim to minimize an upper bound on the eviction loss, quantified as the L1 distance between the pre- and post-eviction outputs of the multi-head self-attention mechanism. Moreover, our analysis indicates that the common practice of uniformly assigning budgets across attention heads during cache eviction hinders budget utilization and degrades generation quality. In light of these findings, we propose a simple yet effective adaptive budget allocation algorithm. This algorithm not only optimizes the loss upper bound in theory but also reduces the eviction loss in practice by aligning with the intrinsic patterns of self-attention. Integrating this algorithm into two state-of-the-art methods, we develop Ada-SnapKV and Ada-Pyramid. Extensive evaluations on 16 datasets and the Needle-in-a-Haystack test confirm that both significantly improve performance across a wide range of tasks.
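To make the two notions above concrete, the following is a minimal, illustrative sketch rather than the paper's actual implementation: it measures the eviction loss as the L1 distance between pre- and post-eviction attention outputs for a single query, and allocates a total KV budget across heads by ranking attention weights jointly over all heads instead of splitting the budget uniformly. The single-query setting, the NumPy usage, and all function names are assumptions made for exposition.

```python
import numpy as np

def attention_output(q, K, V, keep_mask=None):
    """Single-query attention output; keep_mask marks retained KV entries.
    q: (d,), K: (n, d), V: (n, d), keep_mask: optional boolean (n,)."""
    scores = K @ q / np.sqrt(K.shape[1])
    if keep_mask is not None:
        scores = np.where(keep_mask, scores, -np.inf)  # evicted entries excluded
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

def eviction_loss_l1(q, K, V, keep_mask):
    """L1 distance between pre- and post-eviction attention outputs."""
    return np.abs(attention_output(q, K, V)
                  - attention_output(q, K, V, keep_mask)).sum()

def adaptive_head_budgets(attn_weights, total_budget):
    """Split a total KV budget across heads by keeping the globally largest
    attention weights, so each head's share follows where large weights fall
    (a sketch of adaptive allocation, not the uniform per-head split)."""
    num_heads, n = attn_weights.shape
    flat = attn_weights.reshape(-1)
    top = np.argsort(flat)[-total_budget:]              # globally largest weights
    return np.bincount(top // n, minlength=num_heads)   # per-head budget counts
```

Roughly speaking, the evicted attention mass drives the L1 upper bound, so ranking weights jointly across heads (rather than giving every head the same share) is one natural way to spend a fixed total budget; the paper's algorithm formalizes this intuition.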