Efficiently harnessing GPU compute is critical to improving user experience and reducing operational costs in large language model (LLM) services. However, current inference engine schedulers overlook the attention backend's sensitivity to request-length heterogeneity within a batch. As state-of-the-art models now support context windows exceeding 128K tokens, this once-tolerable inefficiency has escalated into a primary system bottleneck, causing severe performance degradation through GPU underutilization and increased latency. We present L4, a runtime system that dynamically reschedules requests across multiple instances serving the same LLM to mitigate per-instance length heterogeneity. L4 partitions these instances into length-specialized groups, each handling requests within a designated length range, so that a pipeline naturally forms as requests flow through them. L4 uses a dynamic programming algorithm to efficiently find the stage partition with the best quality of experience (QoE), and combines runtime range refinement with decentralized load (re)balancing both across and within groups, yielding a balanced and efficient multi-instance service. Our evaluation shows that, under the same configuration, L4 reduces end-to-end latency by up to 67% and tail latency by up to 69%, while improving overall system throughput by up to 2.89 times compared to state-of-the-art multi-instance scheduling systems.
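To make the stage-partition idea above concrete, the following is a minimal, hypothetical sketch of a dynamic program that splits sorted request-length buckets into a fixed number of contiguous length-specialized groups while minimizing an assumed per-stage cost. It is not L4's actual algorithm or QoE model; the function names, the `stage_cost` placeholder, and the toy cost in the example are illustrative assumptions only.

```python
# Hypothetical sketch: partition sorted length buckets into `num_groups`
# contiguous stages, minimizing an assumed per-stage cost (a stand-in for
# the QoE objective). Not L4's actual algorithm or cost model.
from functools import lru_cache
from typing import Callable, List, Tuple


def partition_stages(
    length_buckets: List[int],                      # sorted length boundaries, e.g. [1K, 4K, ...]
    num_groups: int,                                # number of length-specialized instance groups
    stage_cost: Callable[[int, int], float],        # assumed cost of serving buckets [i, j) in one group
) -> Tuple[float, List[Tuple[int, int]]]:
    n = len(length_buckets)

    @lru_cache(maxsize=None)
    def best(start: int, groups_left: int) -> Tuple[float, Tuple[Tuple[int, int], ...]]:
        # All buckets assigned: feasible only if no groups remain unused.
        if start == n:
            return (0.0, ()) if groups_left == 0 else (float("inf"), ())
        # Buckets remain but no groups left: infeasible.
        if groups_left == 0:
            return (float("inf"), ())
        best_cost, best_split = float("inf"), ()
        # Try every contiguous prefix [start, end) as the next stage.
        for end in range(start + 1, n + 1):
            tail_cost, tail_split = best(end, groups_left - 1)
            cost = stage_cost(start, end) + tail_cost
            if cost < best_cost:
                best_cost, best_split = cost, ((start, end),) + tail_split
        return best_cost, best_split

    cost, split = best(0, num_groups)
    return cost, list(split)


if __name__ == "__main__":
    buckets = [1024, 4096, 16384, 65536, 131072]
    # Toy cost: penalize the length spread inside a stage, as a rough proxy
    # for intra-group length heterogeneity.
    toy_cost = lambda i, j: float(buckets[j - 1] - buckets[i])
    print(partition_stages(buckets, num_groups=3, stage_cost=toy_cost))
```

Because stages must cover contiguous, non-overlapping length ranges, the search space decomposes by the first stage boundary, which is what makes a dynamic program (rather than exhaustive enumeration) a natural fit for this kind of partition search.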