Experience replay is crucial for off-policy reinforcement learning (RL) methods. By storing and reusing experiences collected under past policies, experience replay significantly improves the training efficiency and stability of RL algorithms. Many practical decision-making problems naturally involve multiple agents and call for multi-agent reinforcement learning (MARL) under the centralized-training-with-decentralized-execution paradigm. Nevertheless, existing MARL algorithms often adopt standard experience replay, in which transitions are sampled uniformly regardless of their importance. Prioritized sampling weights optimized specifically for MARL experience replay have yet to be explored. To this end, we propose MAC-PO, which formulates optimal prioritized experience replay for multi-agent problems as regret minimization over the sampling weights of transitions. The optimization is relaxed and solved with the Lagrangian multiplier approach to obtain closed-form optimal sampling weights. By minimizing the resulting policy regret, we narrow the gap between the current policy and a nominal optimal policy, thereby obtaining an improved prioritization scheme for multi-agent tasks. Experimental results on the Predator-Prey and StarCraft Multi-Agent Challenge environments demonstrate the effectiveness of our method, which better replays important transitions and outperforms other state-of-the-art baselines.
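To make the contrast with uniform replay concrete, the following is a minimal sketch of a priority-weighted replay buffer. The buffer structure and placeholder priorities are illustrative assumptions; MAC-PO's actual closed-form sampling weights come from its regret-minimization derivation and are not reproduced here.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal sketch of priority-weighted experience replay.

    Each stored transition carries a scalar priority; sampling draws
    transitions in proportion to these priorities, unlike standard
    replay, which samples uniformly. The priority values used below
    are placeholders, not MAC-PO's derived optimal weights.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.transitions = []
        self.priorities = []

    def add(self, transition, priority=1.0):
        # Evict the oldest transition once the buffer is full.
        if len(self.transitions) >= self.capacity:
            self.transitions.pop(0)
            self.priorities.pop(0)
        self.transitions.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Draw transitions with probability proportional to priority.
        return random.choices(
            self.transitions, weights=self.priorities, k=batch_size
        )
```

A transition judged more important (e.g. higher priority) is simply replayed more often; the contribution of a method like MAC-PO lies in how those per-transition weights are chosen.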