Multi-agent systems powered by large language models exhibit strong capabilities in collaborative problem-solving, yet they suffer from substantial knowledge redundancy, with agents duplicating effort in retrieval and reasoning. This inefficiency stems from a deeper issue: current architectures lack mechanisms to ensure that agents share minimal sufficient information at each operational stage. Empirical analysis reveals an average knowledge duplication rate of 47.3\% across agent communications. We propose D3MAS (Decompose, Deduce, and Distribute), a hierarchical coordination framework that addresses redundancy through structural design rather than explicit optimization. The framework organizes collaboration across three coordinated layers: task decomposition filters irrelevant sub-problems early; collaborative reasoning captures complementary inference paths across agents; and distributed memory provides access to non-redundant knowledge. The layers coordinate through structured message passing over a unified heterogeneous graph, keeping shared information aligned with actual task needs. Experiments on four challenging datasets show that D3MAS consistently improves reasoning accuracy by 8.7\% to 15.6\% and reduces knowledge redundancy by 46\% on average.
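The three-layer coordination described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the function and class names (`decompose`, `DistributedMemory`, `deduce`) are assumptions, and the shared-memory deduplication stands in for the distributed memory layer's non-redundant knowledge access.

```python
# Hypothetical sketch of D3MAS-style three-layer coordination.
# All names here are illustrative assumptions, not the paper's API.

def decompose(sub_problems, relevant):
    # Layer 1 (Decompose): filter irrelevant sub-problems early,
    # so downstream agents never work on them.
    return [s for s in sub_problems if s in relevant]

class DistributedMemory:
    # Layer 3 (Distribute): each piece of knowledge is stored once;
    # duplicate writes are rejected, modeling non-redundant access.
    def __init__(self):
        self.store = set()

    def add(self, fact):
        if fact in self.store:
            return False  # redundant knowledge, not stored again
        self.store.add(fact)
        return True

def deduce(sub_problems, agents, memory):
    # Layer 2 (Deduce): agents contribute inference results per
    # sub-problem; only non-redundant findings propagate, so
    # complementary paths are kept and duplicated ones dropped.
    results = []
    for sub in sub_problems:
        for agent in agents:
            fact = agent(sub)
            if memory.add(fact):
                results.append(fact)
    return results
```

In this toy version, redundancy reduction emerges from the structure (filtering, then write-once memory) rather than from any explicit optimization objective, mirroring the design principle the abstract describes.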