Experience replay is crucial for off-policy reinforcement learning (RL) methods. By storing and reusing experiences collected under past policies, experience replay significantly improves the training efficiency and stability of RL algorithms. Many practical decision-making problems naturally involve multiple agents and call for multi-agent reinforcement learning (MARL) under the centralized training with decentralized execution paradigm. Nevertheless, existing MARL algorithms often adopt standard experience replay, in which transitions are sampled uniformly regardless of their importance. Prioritized sampling weights optimized specifically for MARL experience replay have yet to be explored. To this end, we propose \name, which formulates optimal prioritized experience replay for multi-agent problems as regret minimization over the sampling weights of transitions. This optimization is relaxed and solved via the Lagrangian multiplier method to obtain closed-form optimal sampling weights. By minimizing the resulting policy regret, we narrow the gap between the current policy and a nominal optimal policy, thereby obtaining an improved prioritization scheme for multi-agent tasks. Experimental results on the Predator-Prey and StarCraft Multi-Agent Challenge environments demonstrate the effectiveness of our method: it replays important transitions more effectively and outperforms state-of-the-art baselines.