The integration of Large Language Models (LLMs) into applications ranging from interactive chatbots to multi-agent systems has introduced a wide spectrum of service-level objectives (SLOs) for responsiveness. These include latency-sensitive requests emphasizing per-token latency in streaming chat, deadline-sensitive requests requiring rapid full responses to trigger external tools, and compound requests with evolving dependencies across multiple LLM calls. Despite, or perhaps because of, this workload diversity and unpredictable request information (e.g., response lengths and dependencies), existing request schedulers have focused on aggregate performance and cannot satisfy application-level SLO needs. This paper presents JITServe, the first SLO-aware LLM serving system designed to maximize service goodput (i.e., the number of tokens meeting request SLOs) across diverse workloads. JITServe initially schedules requests conservatively using imprecise request information, then gradually relaxes this conservatism by refining its estimates as generation progresses. It applies a grouped margin goodput maximization algorithm to allocate just enough serving bandwidth to satisfy each request's SLO just-in-time (JIT), maximizing residual capacity for other requests, while deciding the composition of each batch to maximize efficiency and goodput with provable guarantees. Our evaluation across diverse realistic workloads, including chat, deep research, and agentic pipelines, shows that JITServe improves service goodput by 1.4x-6.3x, or equivalently achieves 28.5%-83.2% resource savings, compared to state-of-the-art designs.