Softmax policy gradient is a popular algorithm for policy optimization in single-agent reinforcement learning, particularly because no projection is needed after each gradient update. However, in multi-agent systems, the lack of central coordination introduces significant additional difficulties in the convergence analysis. Even for a stochastic game with identical interests, there can be multiple Nash Equilibria (NEs), which rules out proof techniques that rely on the existence of a unique global optimum. Moreover, the softmax parameterization introduces non-NE policies with zero gradient, making NE-seeking difficult for gradient-based algorithms. In this paper, we study the finite-time convergence of decentralized softmax gradient play in a special class of games, Markov Potential Games (MPGs), which include identical-interest games as a special case. We investigate both gradient play and natural gradient play, with and without $\log$-barrier regularization. Establishing convergence for the unregularized cases relies on an assumption that the stationary policies are isolated, and yields convergence bounds that contain a trajectory-dependent constant that can be arbitrarily large. We introduce the $\log$-barrier regularization to overcome these drawbacks, at the cost of a slightly worse dependence on other factors such as the action set size. An empirical study on an identical-interest matrix game confirms the theoretical findings.
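To make the setting concrete, the following is a minimal sketch of decentralized softmax gradient play on a two-player identical-interest matrix game, with an optional $\log$-barrier term that pulls each policy toward the uniform distribution. The payoff matrix, step size, regularization weighting, and horizon below are illustrative assumptions, not the paper's exact algorithmic constants or experimental setup.

```python
import numpy as np

def softmax(theta):
    # Numerically stable softmax over logits theta.
    z = np.exp(theta - theta.max())
    return z / z.sum()

def run_gradient_play(R, eta=0.1, lam=0.0, T=5000, seed=0):
    """Decentralized softmax gradient play on a two-player
    identical-interest matrix game with shared payoff matrix R.
    Each player i keeps its own logits theta_i and ascends the gradient
    of the common expected payoff plus (optionally) a log-barrier term
    weighted by lam; the exact weighting here is an illustrative choice."""
    rng = np.random.default_rng(seed)
    n1, n2 = R.shape
    th1, th2 = rng.normal(size=n1), rng.normal(size=n2)
    for _ in range(T):
        p1, p2 = softmax(th1), softmax(th2)
        # Expected payoff J = p1^T R p2; chain rule through the softmax
        # Jacobian d p_i / d theta_i = diag(p_i) - p_i p_i^T.
        g1 = (np.diag(p1) - np.outer(p1, p1)) @ (R @ p2)
        g2 = (np.diag(p2) - np.outer(p2, p2)) @ (R.T @ p1)
        # Gradient of (lam / |A_i|) * sum_a log pi_i(a) in logit space
        # simplifies to lam * (uniform - pi_i), nudging toward uniform.
        g1 += lam * (1.0 / n1 - p1)
        g2 += lam * (1.0 / n2 - p2)
        th1 += eta * g1
        th2 += eta * g2
    return softmax(th1), softmax(th2)

if __name__ == "__main__":
    # Identical-interest game with two strict NEs; (action 0, action 0)
    # is globally optimal, (action 1, action 1) is a suboptimal NE.
    R = np.array([[1.0, 0.0],
                  [0.0, 0.8]])
    p1, p2 = run_gradient_play(R, lam=0.01)
    print("player 1 policy:", p1)
    print("player 2 policy:", p2)
```

Setting `lam=0.0` recovers the unregularized dynamics, whose limit point depends on the initialization when multiple NEs exist; a small positive `lam` keeps the policies away from the boundary of the simplex, which is the role the $\log$-barrier plays in the analysis.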