We study a multi-agent reinforcement learning (MARL) problem where the agents interact over a given network. The goal of the agents is to cooperatively maximize the average of their entropy-regularized long-term rewards. To overcome the curse of dimensionality and to reduce communication, we propose a Localized Policy Iteration (LPI) algorithm that provably learns a near-globally-optimal policy using only local information. In particular, we show that, despite restricting each agent's attention to only its $\kappa$-hop neighborhood, the agents are able to learn a policy with an optimality gap that decays polynomially in $\kappa$. In addition, we establish the finite-sample convergence of LPI to the globally optimal policy, which explicitly captures the trade-off between optimality and computational complexity in choosing $\kappa$. Numerical simulations demonstrate the effectiveness of LPI.
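To make the $\kappa$-hop restriction concrete, the following is a minimal illustrative sketch (not the paper's implementation) of how each agent's local information set could be computed on the interaction network. It assumes a `networkx` graph; the function and variable names are hypothetical.

```python
# Minimal sketch (illustrative only): computing the kappa-hop neighborhood
# to which each agent's localized policy and value estimates are restricted.
# Assumes networkx is available; the graph and names are not from the paper.
import networkx as nx


def kappa_hop_neighborhood(graph: nx.Graph, agent: int, kappa: int) -> set:
    """Return the set of agents within kappa hops of `agent` (including itself)."""
    lengths = nx.single_source_shortest_path_length(graph, agent, cutoff=kappa)
    return set(lengths.keys())


# Example: a 6-agent ring network; with kappa = 1 each agent only uses
# information from itself and its two immediate neighbors.
if __name__ == "__main__":
    G = nx.cycle_graph(6)
    for i in G.nodes:
        print(i, sorted(kappa_hop_neighborhood(G, i, kappa=1)))
```

Larger $\kappa$ enlarges each agent's information set, which (per the abstract) shrinks the optimality gap polynomially at the cost of more computation and communication.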