Large Language Models (LLMs) are increasingly deployed as autonomous agents, yet their ability to coordinate in distributed systems remains poorly understood. We introduce \textbf{LoopBench}, a benchmark for evaluating LLM reasoning in distributed symmetry breaking and meta-cognitive thinking. The benchmark centers on coloring odd cycle graphs ($C_3$, $C_5$, $C_{11}$) with a limited palette of colors, a setting in which deterministic, non-communicating agents fail in infinite loops. A strategy-passing mechanism is implemented as a form of consistent memory. We show that while standard LLMs and classical heuristics struggle, advanced reasoning models (e.g., O3) devise strategies that escape these deadlocks. LoopBench enables the study of emergent distributed algorithms driven by language-based reasoning, offering a testbed for collective intelligence.
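To make the failure mode concrete, the following is a minimal sketch (not part of the benchmark's released code) of why identical deterministic agents loop forever on an odd cycle: from a symmetric start, a shared local rule applied synchronously preserves the symmetry, so some edge always stays monochromatic. The function names \texttt{greedy\_rule} and \texttt{run} are illustrative assumptions, as is the particular greedy rule.

\begin{verbatim}
def greedy_rule(my_color: int, neighbor_colors: list[int], k: int) -> int:
    """Deterministic local rule: keep the current color if no neighbor
    shares it; otherwise switch to the smallest non-conflicting color."""
    if my_color not in neighbor_colors:
        return my_color
    for c in range(k):
        if c not in neighbor_colors:
            return c
    return my_color  # no conflict-free color available this round

def run(n: int = 5, k: int = 3, rounds: int = 20) -> bool:
    """Synchronously update an odd cycle C_n from a symmetric start.
    Returns True iff a proper coloring is ever reached."""
    colors = [0] * n  # symmetric initial state: every agent is identical
    for _ in range(rounds):
        if all(colors[i] != colors[(i + 1) % n] for i in range(n)):
            return True
        colors = [
            greedy_rule(colors[i], [colors[i - 1], colors[(i + 1) % n]], k)
            for i in range(n)
        ]
    return False

if __name__ == "__main__":
    print(run())  # False: all agents flip between color 0 and color 1
                  # in lockstep, so the symmetry is never broken
\end{verbatim}

Under these assumptions the system oscillates between the all-0 and all-1 states indefinitely, which is the deadlock that LoopBench asks language-based agents to escape.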