As the length of input text increases, the key-value (KV) cache in large language models (LLMs) imposes prohibitive GPU memory costs and limits long-context inference on resource-constrained devices. Existing approaches, such as KV quantization and pruning, reduce memory usage but suffer from numerical precision loss or suboptimal retention of key-value pairs. In this work, we introduce Low Rank Query and Key attention (LRQK), a two-stage framework that jointly decomposes the full-precision query and key matrices into compact rank-\(r\) factors during the prefill stage and then uses these low-dimensional projections to compute proxy attention scores in \(\mathcal{O}(lr)\) time at each decode step. By selecting only the top-\(k\) tokens and a small fixed set of recent tokens, LRQK maintains a mixed GPU-CPU cache with a hit-and-miss mechanism in which only the missing full-precision KV pairs are transferred, thereby preserving exact attention outputs while reducing CPU-GPU data movement. Extensive experiments on the RULER and LongBench benchmarks with LLaMA-3-8B and Qwen2.5-7B demonstrate that LRQK matches or surpasses leading sparse-attention methods in long-context settings, while delivering significant memory savings with minimal accuracy loss. Our code is available at https://github.com/tenghuilee/LRQK.
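To make the mechanism described above concrete, the sketch below illustrates the general idea in PyTorch: rank-\(r\) projections of the current query and the cached keys give proxy scores in \(\mathcal{O}(lr)\) per decode step, the top-\(k\) and most recent tokens are selected, and exact attention is computed only over the selected full-precision KV pairs. This is a minimal illustration under assumed names and shapes (e.g., `select_tokens`, the projection `P`), not the authors' implementation; in LRQK the rank-\(r\) factors come from a joint decomposition at prefill, and missing KV pairs are fetched from a CPU-side cache.

```python
# Minimal sketch (not the authors' implementation) of low-rank proxy scoring
# plus sparse exact attention. All tensor names and shapes are assumptions.
import torch

def select_tokens(q_low, k_low, top_k=64, recent=32):
    """Pick token indices using rank-r proxy scores.

    q_low: (r,)   low-rank projection of the current query
    k_low: (l, r) low-rank projections of all l cached keys
    """
    proxy_scores = k_low @ q_low                      # O(l * r) per decode step
    l = k_low.shape[0]
    recent_idx = torch.arange(max(0, l - recent), l)  # always keep a recent window
    past_scores = proxy_scores[: max(0, l - recent)]
    k_eff = min(top_k, past_scores.numel())
    top_idx = (torch.topk(past_scores, k_eff).indices
               if k_eff > 0 else torch.empty(0, dtype=torch.long))
    return torch.unique(torch.cat([top_idx, recent_idx]))

def exact_attention_on_selected(q, K_full, V_full, idx, scale):
    """Exact full-precision attention restricted to the selected tokens."""
    # In a mixed GPU-CPU cache, only the missing rows would be copied from CPU.
    K_sel, V_sel = K_full[idx], V_full[idx]
    attn = torch.softmax((K_sel @ q) * scale, dim=0)
    return attn @ V_sel

# Toy usage with random tensors.
l, d, r = 1024, 128, 16
torch.manual_seed(0)
K_full, V_full, q = torch.randn(l, d), torch.randn(l, d), torch.randn(d)
P = torch.randn(d, r) / d**0.5   # stand-in rank-r factor; LRQK learns this at prefill
idx = select_tokens(q @ P, K_full @ P)
out = exact_attention_on_selected(q, K_full, V_full, idx, scale=d**-0.5)
print(out.shape)  # torch.Size([128])
```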