The rapid emergence of diverse large language models (LLMs) has spurred the development of LLM routers that assign user queries to the most suitable model. However, existing LLM routers typically perform a single-round, one-to-one mapping (\textit{i.e.}, assigning each query to a single model in isolation), which limits their ability to tackle complex tasks that demand the complementary strengths of multiple LLMs. In this paper, we present \textbf{Router-R1}, a reinforcement learning (RL)-based framework that formulates multi-LLM routing and aggregation as a sequential decision process. Router-R1 instantiates the router itself as a capable LLM, leveraging its reasoning ability to interleave ``think'' actions (internal deliberation) with ``route'' actions (dynamic model invocation), and integrates each response into its evolving context. To facilitate learning, we employ a lightweight rule-based reward comprising a format reward, a final outcome reward, and a novel cost reward, opening a pathway toward optimizing the performance-cost trade-off via RL. Router-R1 also conditions only on simple model descriptors such as pricing, latency, and example performance, enabling strong generalization to model selection over unseen LLMs. Experiments on seven general and multi-hop QA benchmarks show that Router-R1 outperforms several strong baselines, achieving superior performance while maintaining robust generalization and cost management.
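As a rough illustrative sketch (the additive form, the cost term $C$, and the coefficient $\alpha$ below are assumptions for exposition, not the exact formulation), the rule-based reward can be viewed as combining the three components as
\[
R \;=\; R_{\mathrm{format}} \;+\; R_{\mathrm{outcome}} \;-\; \alpha \, C,
\]
where $R_{\mathrm{format}}$ checks adherence to the think/route output format, $R_{\mathrm{outcome}}$ scores the final answer, $C$ aggregates the cost (\textit{e.g.}, pricing and latency) of the invoked LLMs, and $\alpha$ controls the performance-cost trade-off.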