Stochastic games are a popular framework for studying multi-agent reinforcement learning (MARL). Recent advances in MARL have focused primarily on games with finitely many states. In this work, we study multi-agent learning in stochastic games with general state spaces, under an information structure in which agents do not observe each other's actions. In this setting, we propose a decentralized MARL algorithm and prove the near-optimality of its policy updates. Furthermore, we study the global policy-updating dynamics for a general class of best-reply-based algorithms and derive a closed-form characterization of the convergence probabilities over the joint policy space.