In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse NLP tasks. Extensive research has explored how to enhance their logical reasoning abilities through techniques such as Chain-of-Thought, Chain-of-Thought with Self-Consistency, Tree-of-Thoughts, and multi-agent debate. In multi-agent debate, significant performance improvements can be achieved by increasing the number of agents and debate rounds. However, this escalation drastically raises the token cost of debates, limiting the scalability of the multi-agent debate technique. To better harness the advantages of multi-agent debate in logical reasoning tasks, this paper proposes a method that significantly reduces its token cost. The approach divides all agents into multiple debate groups: agents debate within their respective groups and share interim debate results between groups. Comparative experiments across multiple datasets demonstrate that this method reduces total token usage during debates by up to 51.7% while improving accuracy by as much as 25%. Our method significantly enhances both the performance and the interaction efficiency of multi-agent debate.
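The grouped-debate scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the majority-vote consensus, and the toy `answer_fn` interface are all assumptions introduced here. The token saving comes from each agent seeing only its own group's full answers plus compact inter-group summaries, rather than every other agent's full output.

```python
from collections import Counter

def split_into_groups(agents, group_size):
    """Partition the agent list into consecutive groups of `group_size`."""
    return [agents[i:i + group_size] for i in range(0, len(agents), group_size)]

def group_debate(agents, rounds, group_size, answer_fn):
    """Run a grouped multi-agent debate (illustrative sketch).

    Each round, every agent answers given the interim summaries shared
    between groups; each group then condenses its answers into one
    summary by majority vote. Only these summaries cross group
    boundaries, which is what keeps the token cost down.
    """
    groups = split_into_groups(agents, group_size)
    shared = []  # interim group results, broadcast between groups
    for _ in range(rounds):
        summaries = []
        for group in groups:
            answers = [answer_fn(agent, shared) for agent in group]
            # Intra-group consensus: majority vote over the group's answers
            summaries.append(Counter(answers).most_common(1)[0][0])
        shared = summaries  # share interim results across groups
    # Final answer: majority vote over the groups' last summaries
    return Counter(shared).most_common(1)[0][0]

# Toy stand-in for an LLM agent: sticks to its initial answer until
# inter-group summaries exist, then adopts their majority.
def toy_answer(agent, shared):
    if shared:
        return Counter(shared).most_common(1)[0][0]
    return agent

if __name__ == "__main__":
    agents = ["yes", "yes", "no", "yes", "no", "yes"]
    print(group_debate(agents, rounds=2, group_size=3, answer_fn=toy_answer))
```

In a real system, `answer_fn` would prompt an LLM with the agent's history and the other groups' summaries; the vote-based consensus here is just a cheap placeholder for that aggregation step.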