Decentralized policy optimization has been commonly used in cooperative multi-agent tasks. However, since all agents update their policies simultaneously, the environment is non-stationary from the perspective of each individual agent, which makes monotonic policy improvement hard to guarantee. To make policy improvement stable and monotonic, we propose model-based decentralized policy optimization (MDPO), which incorporates a latent variable function to help construct the transition and reward functions from an individual perspective. We theoretically show that the policy optimization of MDPO is more stable than that of model-free decentralized policy optimization. Moreover, due to non-stationarity, the latent variable function keeps changing and is hard to model. We further propose a latent variable prediction method to reduce the error of the latent variable function, which theoretically contributes to monotonic policy improvement. Empirically, MDPO indeed outperforms model-free decentralized policy optimization in a variety of cooperative multi-agent tasks.
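To make the two components named in the abstract concrete, the following is a minimal sketch, not the authors' implementation: an individual-perspective dynamics model that conditions on a latent variable summarizing the other agents' influence, and a predictor that estimates how that latent variable evolves as the other agents' policies change. All names and dimensions (LatentDynamicsModel, LatentPredictor, obs_dim, act_dim, latent_dim) are hypothetical placeholders, and a simple PyTorch setting with vector observations is assumed.

```python
# Hypothetical sketch of MDPO's latent-variable-augmented model, for illustration only.
import torch
import torch.nn as nn


class LatentDynamicsModel(nn.Module):
    """Predicts an agent's next observation and reward from its own observation,
    its own action, and a latent variable z intended to summarize the
    (unobserved) influence of the other agents."""

    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, obs_dim + 1),  # next observation + scalar reward
        )

    def forward(self, obs, act, z):
        out = self.net(torch.cat([obs, act, z], dim=-1))
        next_obs, reward = out[..., :-1], out[..., -1]
        return next_obs, reward


class LatentPredictor(nn.Module):
    """Predicts the latent variable at the next step from local information,
    so the learned model remains usable even though the other agents'
    policies (and hence the latent variable) keep changing."""

    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, obs, act, z):
        return self.net(torch.cat([obs, act, z], dim=-1))


if __name__ == "__main__":
    obs_dim, act_dim, latent_dim = 8, 2, 4
    model = LatentDynamicsModel(obs_dim, act_dim, latent_dim)
    predictor = LatentPredictor(obs_dim, act_dim, latent_dim)

    obs = torch.randn(32, obs_dim)
    act = torch.randn(32, act_dim)
    z = torch.randn(32, latent_dim)        # current latent estimate

    next_obs, reward = model(obs, act, z)  # individual-perspective model step
    z_next = predictor(obs, act, z)        # predicted latent for the next step
    print(next_obs.shape, reward.shape, z_next.shape)
```

In this reading, reducing the prediction error of z_next is what the abstract's latent variable prediction method targets, since a more accurate latent estimate keeps the individual-perspective model, and hence the model-based policy updates, closer to the true (non-stationary) environment.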