Understanding decision-making in multi-AI-agent frameworks is crucial for analyzing strategic interactions in network-effect-driven contexts. This study investigates how AI agents navigate network-effect games, in which individual payoffs depend on peer participation, a setting that is prevalent in the real world yet underexplored in multi-agent systems research. We introduce a novel workflow design using large language model (LLM)-based agents in repeated decision-making scenarios, systematically manipulating price trajectories (fixed, ascending, descending, random) and network-effect strength. Our key findings are threefold. First, without historical data, agents fail to infer the equilibrium. Second, ordered historical sequences (e.g., escalating prices) enable partial convergence under weak network effects, but strong effects trigger persistent "AI optimism": agents overestimate participation despite contradictory evidence. Third, randomized history disrupts convergence entirely, demonstrating that, unlike for humans, the temporal coherence of data shapes LLMs' reasoning. These results highlight a paradigm shift: in AI-mediated systems, equilibrium outcomes depend not only on incentives but also on how history is curated, a form of influence that has no counterpart for human participants.
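The structure of the game described above can be illustrated with a minimal sketch. The payoff form, parameter names, and the best-response iteration below are illustrative assumptions for exposition, not the paper's actual experimental specification: each agent joins if the standalone value plus a network term, minus the price, is positive, and a fixed point of this best-response process is an equilibrium.

```python
# Hypothetical sketch of a network-effect game: payoff depends on peer
# participation. All parameter names and functional forms are assumptions.

def payoff(expected_peers: int, price: float,
           base_value: float = 1.0, network_strength: float = 0.5) -> float:
    """Payoff from joining: standalone value plus a network term, minus price."""
    return base_value + network_strength * expected_peers - price


def best_response_participation(n_agents: int, price: float,
                                network_strength: float,
                                max_rounds: int = 50) -> int:
    """Iterate best responses: each agent joins iff joining yields positive
    payoff given last round's participation. A fixed point is an equilibrium."""
    participants = n_agents  # optimistic start (everyone assumed to join)
    for _ in range(max_rounds):
        joiners = sum(
            1 for _ in range(n_agents)
            if payoff(participants - 1, price,
                      network_strength=network_strength) > 0
        )
        if joiners == participants:  # fixed point reached
            break
        participants = joiners
    return participants
```

Under this toy specification, a low price sustains full participation while a high price collapses it to zero, mirroring the multiplicity of outcomes that makes equilibrium inference hard for the agents in the study.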