Large language model (LLM) agents have shown increasing promise for collaborative task completion. However, existing multi-agent frameworks often rely on static workflows, fixed roles, and limited inter-agent communication, which reduces their effectiveness in open-ended, high-complexity domains. This paper proposes a coordination framework that enables adaptiveness through three core mechanisms: dynamic task routing, bidirectional feedback, and parallel agent evaluation. The framework allows agents to reallocate tasks based on confidence and workload, to exchange structured critiques that iteratively improve outputs, and, crucially, to compete on high-ambiguity subtasks, with an evaluator selecting the most suitable result. We instantiate these principles in a modular architecture and demonstrate substantial improvements in factual coverage, coherence, and efficiency over static and partially adaptive baselines. Our findings highlight the benefits of incorporating both adaptiveness and structured competition in multi-agent LLM systems.
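To make the routing and competition mechanisms concrete, the sketch below shows one possible shape for confidence/workload-based task routing and evaluator-driven selection among parallel candidate outputs. The scoring rule, agent fields, and evaluator are illustrative assumptions, not the paper's actual policies.

```python
def route_task(task, agents):
    """Dynamic task routing (hypothetical rule): pick the agent with the
    best confidence-to-workload ratio for the given task."""
    return max(agents, key=lambda a: a["confidence"] / (1 + a["workload"]))

def select_best(candidates, evaluator):
    """Parallel agent evaluation: competing outputs for a high-ambiguity
    subtask are scored by an evaluator, and the top-scoring one is kept."""
    return max(candidates, key=evaluator)

# Toy demo with made-up agents; a real evaluator would be an LLM judge,
# here stood in for by a trivial length-based scorer.
agents = [
    {"name": "planner", "confidence": 0.9, "workload": 2},
    {"name": "writer",  "confidence": 0.7, "workload": 0},
]
chosen = route_task("summarize findings", agents)          # "writer": 0.7/1 > 0.9/3
best = select_best(["short draft", "a longer, detailed draft"], evaluator=len)
```

In a full system, the bidirectional-feedback step would sit between routing and selection: agents exchange structured critiques of each other's drafts before the evaluator makes its final choice.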