Multi-Agent Reinforcement Learning (MARL) has seen revolutionary breakthroughs through its successful application to multi-agent cooperative tasks such as robot swarm control, autonomous vehicle coordination, and computer games. In this paper, we propose Noisy-MAPPO, which achieves winning rates of more than 90% in all StarCraft Multi-Agent Challenge (SMAC) scenarios. First, we theoretically generalize Proximal Policy Optimization (PPO) to Multi-Agent PPO (MAPPO) via the lower bound of Trust Region Policy Optimization (TRPO). However, we find that the shared advantage values in the MAPPO objective function may mislead the learning of agents that are unrelated to these advantage values, a problem we call Policy Overfitting in Multi-Agent Cooperation (POMAC). Therefore, we propose noisy advantage-value methods (Noisy-MAPPO and Advantage-Noisy-MAPPO) to solve this problem. The experimental results show that our random-noise method improves the performance of vanilla MAPPO by 80% in some Super-Hard SMAC scenarios. We open-source the code at \url{https://github.com/hijkzzz/noisy-mappo}.
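The core idea above, injecting noise into the shared advantage values so that each agent's policy update receives a slightly different learning signal, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the Gaussian noise form, and the `sigma` scale are all assumptions made for illustration.

```python
import numpy as np

def noisy_advantages(advantages, n_agents, sigma=0.1, rng=None):
    """Illustrative sketch of a noisy advantage-value method.

    In vanilla MAPPO every agent's policy gradient is weighted by the
    same shared advantage A_t; here each agent i instead receives
    A_t + eps_i with independent Gaussian noise eps_i, which
    decorrelates the per-agent learning signals.
    The Gaussian form and sigma=0.1 are illustrative assumptions.
    """
    rng = rng or np.random.default_rng()
    adv = np.asarray(advantages, dtype=np.float64)  # shape: (T,)
    # One independent noise vector per agent -> shape (n_agents, T).
    noise = rng.normal(0.0, sigma, size=(n_agents,) + adv.shape)
    return adv[None, :] + noise
```

Each row of the returned array would then be used as the advantage term in the corresponding agent's clipped PPO surrogate objective.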