We propose a reinforcement learning algorithm for stationary mean-field games, where the goal is to learn a pair of a mean-field state and a stationary policy that together constitute a Nash equilibrium. Viewing the mean-field state and the policy as two players, we propose a fictitious play algorithm that alternately updates the mean-field state via gradient descent and the policy via proximal policy optimization. Our algorithm stands in stark contrast with previous work, which solves to optimality each single-agent reinforcement learning problem induced by the iterated mean-field states. Furthermore, we prove that our fictitious play algorithm converges to the Nash equilibrium at a sublinear rate. To the best of our knowledge, this is the first provably convergent single-loop reinforcement learning algorithm for mean-field games based on iterative updates of both the mean-field state and the policy.
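The single-loop structure described above — one incremental mean-field update and one incremental policy update per iteration, rather than solving an inner RL problem to optimality — can be illustrated with a minimal sketch. Everything concrete here is an assumption for illustration: a toy finite mean-field game with random transition kernels, a crowd-averse reward, a damped fixed-point step standing in for the gradient-descent update on the mean-field state, and a one-step softmax policy-gradient step standing in for proximal policy optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 4, 3  # toy numbers of states and actions (assumption, not from the paper)

# Random transition kernels P[a, s, s'] and base rewards for a toy problem.
P = rng.dirichlet(np.ones(S), size=(A, S))   # shape (A, S, S), rows sum to 1
base_r = rng.uniform(0.0, 1.0, size=(S, A))

def reward(mu):
    # Crowd-averse reward: occupying a crowded state is penalized (assumption).
    return base_r - mu[:, None]              # shape (S, A)

def policy(theta):
    # Softmax policy pi[s, a] parameterized by theta.
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

theta = np.zeros((S, A))
mu = np.full(S, 1.0 / S)                     # initial mean-field state
alpha_mu, alpha_pi = 0.1, 0.5                # step sizes (assumptions)

for _ in range(500):
    pi = policy(theta)
    # Transition matrix induced by pi: P_pi[s, s'] = sum_a pi[s, a] P[a, s, s'].
    P_pi = np.einsum('sa,ast->st', pi, P)

    # (1) Mean-field player: damped step toward the distribution induced by pi
    #     (a stand-in for the paper's gradient-descent update on mu).
    mu = (1.0 - alpha_mu) * mu + alpha_mu * (mu @ P_pi)

    # (2) Policy player: one softmax policy-gradient ascent step on the
    #     one-step expected reward (a stand-in for PPO).
    r = reward(mu)
    adv = r - (pi * r).sum(axis=1, keepdims=True)   # per-state advantage
    theta += alpha_pi * mu[:, None] * pi * adv

# How stationary is the learned pair? Measure the consistency gap mu - mu P_pi.
pi = policy(theta)
P_pi = np.einsum('sa,ast->st', pi, P)
gap = np.abs(mu - mu @ P_pi).max()
print(f"stationarity gap after training: {gap:.2e}")
```

The key point of the sketch is that neither player is solved to convergence inside the loop: each iteration performs exactly one cheap update per player, which is what distinguishes the single-loop scheme from the double-loop methods the abstract contrasts against.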