Large language models (LLMs) have shown strong reasoning and coding capabilities, yet they struggle to generalize to real-world software engineering (SWE) problems that are long-horizon and out of distribution. Existing systems often rely on a single agent to handle the entire workflow (interpreting issues, navigating large codebases, and implementing fixes) within one reasoning chain. Such monolithic designs force the model to retain irrelevant context, leading to spurious correlations and poor generalization. Motivated by how human engineers decompose complex problems, we propose structuring SWE agents as orchestrators coordinating specialized sub-agents for sub-tasks such as localization, editing, and validation. The challenge lies in discovering effective hierarchies automatically: as the number of sub-agents grows, the search space becomes combinatorial, and it is difficult to attribute credit to individual sub-agents within a team. We address these challenges by formulating hierarchy discovery as a multi-armed bandit (MAB) problem, where each arm represents a candidate sub-agent and the reward measures its helpfulness when collaborating with others. This framework, termed Bandit Optimization for Agent Design (BOAD), enables efficient exploration of sub-agent designs under limited evaluation budgets. On SWE-bench-Verified, BOAD outperforms single-agent and manually designed multi-agent systems. On SWE-bench-Live, featuring more recent and out-of-distribution issues, our 36B system ranks second on the leaderboard at the time of evaluation, surpassing larger models such as GPT-4 and Claude. These results demonstrate that automatically discovered hierarchical multi-agent systems significantly improve generalization on challenging long-horizon SWE tasks. Code is available at https://github.com/iamxjy/BOAD-SWE-Agent.
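The MAB formulation above can be illustrated with a minimal sketch. This is not the BOAD implementation, only a generic UCB1 loop under assumed simplifications: each arm stands for one candidate sub-agent design, and `evaluate` is a hypothetical black-box that returns a scalar reward in [0, 1] (e.g., whether the team solves a held-out issue when that candidate is included). The exploration constant `c` and the Bernoulli reward model are illustrative choices, not values from the paper.

```python
import math
import random

def ucb1_select(counts, values, t, c=1.4):
    """Pick the arm (candidate sub-agent) with the highest UCB score.

    counts[i] -- how often arm i was evaluated so far
    values[i] -- running mean reward of arm i
    t         -- current round (1-indexed)
    """
    for i, n in enumerate(counts):
        if n == 0:
            return i  # evaluate every candidate once before using the bound
    scores = [v + c * math.sqrt(math.log(t) / n)
              for n, v in zip(counts, values)]
    return max(range(len(scores)), key=scores.__getitem__)

def run_bandit(evaluate, n_arms, budget):
    """UCB1 loop: spend a limited evaluation budget across candidate
    sub-agents, favoring the ones that look most helpful so far."""
    counts = [0] * n_arms
    values = [0.0] * n_arms
    for t in range(1, budget + 1):
        arm = ucb1_select(counts, values, t)
        r = evaluate(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
    return values, counts

# Toy demo with synthetic Bernoulli rewards (arm 2 is the best on average).
random.seed(0)
true_means = [0.2, 0.4, 0.7]
values, counts = run_bandit(
    lambda a: 1.0 if random.random() < true_means[a] else 0.0,
    n_arms=3, budget=300)
print(counts)
```

In this toy setting the budget concentrates on the strongest candidate as its empirical mean separates from the others; the real setting replaces the synthetic reward with an actual benchmark evaluation of the assembled team, which is what makes credit assignment to individual sub-agents nontrivial.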