Streaming video large language models (LLMs) are increasingly used for real-time multimodal tasks such as video captioning, question answering, conversational agents, and augmented reality. However, these models face fundamental memory and computational challenges because their key-value (KV) caches grow substantially with continuous streaming video input. Unlike offline video LLMs, streaming models must run an iterative prefill stage as new frames arrive, which incurs extensive computation, substantial data transfer, and accuracy degradation. Crucially, these costs are exacerbated in edge deployment, the primary target for such models. In this work, we propose V-Rex, the first software-hardware co-designed accelerator that comprehensively addresses both the algorithmic and hardware bottlenecks of streaming video LLM inference. At its core, V-Rex introduces ReSV, a training-free dynamic KV cache retrieval algorithm that exploits temporal and spatial similarity-based token clustering to reduce redundant KV cache memory across video frames. To fully realize these algorithmic benefits, V-Rex integrates a compact, low-latency dynamic KV cache retrieval engine (DRE) featuring bit-level and early-exit-based computing units. V-Rex achieves unprecedented real-time throughput of 3.9-8.3 FPS and energy-efficient streaming video LLM inference at the edge with negligible accuracy loss. While the DRE accounts for only 2.2% of power and 2.0% of area, the system delivers a 1.9-19.7x speedup and 3.1-18.5x better energy efficiency than an NVIDIA AGX Orin GPU. This work is the first to comprehensively tackle KV cache retrieval across algorithms and hardware, enabling real-time streaming video LLM inference on resource-constrained edge devices.
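The abstract does not spell out ReSV's procedure; as a rough illustration of the similarity-based token clustering it names, the following minimal PyTorch sketch merges per-frame KV entries whose key vectors exceed a cosine-similarity threshold against an already-kept cluster representative. The threshold value, the running-mean merge rule, and the function name `cluster_kv_tokens` are illustrative assumptions, not the paper's actual algorithm.

```python
import torch
import torch.nn.functional as F

def cluster_kv_tokens(keys, values, sim_threshold=0.9):
    """Illustrative similarity-based KV token clustering (not ReSV itself).

    keys, values: [num_tokens, head_dim] KV entries for one frame.
    A token whose key is highly similar to an existing cluster
    representative is folded into it via a running mean instead of
    being stored separately, shrinking the KV cache.
    """
    kept_k, kept_v, counts = [], [], []
    k_norm = F.normalize(keys, dim=-1)
    for i in range(keys.size(0)):
        if kept_k:
            # Cosine similarity of token i against all cluster representatives.
            reps = F.normalize(torch.stack(kept_k), dim=-1)
            sims = reps @ k_norm[i]
            j = int(sims.argmax())
            if sims[j] >= sim_threshold:
                # Merge token i into its nearest cluster (running mean).
                counts[j] += 1
                kept_k[j] += (keys[i] - kept_k[j]) / counts[j]
                kept_v[j] += (values[i] - kept_v[j]) / counts[j]
                continue
        # No sufficiently similar cluster: start a new one.
        kept_k.append(keys[i].clone())
        kept_v.append(values[i].clone())
        counts.append(1)
    return torch.stack(kept_k), torch.stack(kept_v)
```

In a streaming setting, the same matching step could be applied temporally, comparing each new frame's tokens against the clusters retained from previous frames, so that static background regions collapse into a few representatives while only novel content adds cache entries.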