This paper introduces an information-theoretic constraint on learned policy complexity in the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) reinforcement learning algorithm. Previous research applying a related constraint in continuous control experiments suggests that it favors policies that are more robust to changes in environment dynamics. The multi-agent game setting naturally demands this kind of robustness: because other agents' policies change throughout learning, each agent faces a nonstationary environment. For this reason, we compare our approach, termed Capacity-Limited MADDPG, against recent methods in continual learning. Results from experiments in multi-agent cooperative and competitive tasks indicate that the capacity-limited approach is a promising candidate for improving learning performance in these environments.
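For concreteness, the following is a minimal sketch of how an information-theoretic capacity constraint can be attached to an actor update. It assumes the constraint is realized as a KL penalty toward a state-independent marginal action distribution, which upper-bounds the mutual information I(S; A) used as the policy-complexity measure, and it uses a Gaussian actor so that the KL term is well-defined. The coefficient beta, the network shapes, and the single-agent critic are illustrative assumptions rather than the paper's implementation; in MADDPG proper the critic would be centralized over all agents' observations and actions.

import torch
import torch.nn as nn

class StochasticActor(nn.Module):
    # Small Gaussian policy head; the capacity penalty needs a distribution over actions.
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        h = self.body(obs)
        return torch.distributions.Normal(self.mu(h), self.log_std.exp())

def capacity_limited_actor_loss(actor, critic, obs, marginal, beta=1e-2):
    # Maximize the critic's value of the sampled action minus beta times a KL
    # penalty toward a state-independent marginal; the KL term upper-bounds the
    # mutual information I(S; A), the measure of policy complexity assumed here.
    pi = actor.dist(obs)
    action = pi.rsample()                               # reparameterized action sample
    q = critic(torch.cat([obs, action], dim=-1))        # per-sample value estimates
    kl = torch.distributions.kl_divergence(pi, marginal).sum(-1)
    return -(q.squeeze(-1) - beta * kl).mean()          # ascend the penalized objective

# Usage with dummy data (shapes are illustrative):
obs_dim, act_dim = 8, 2
actor = StochasticActor(obs_dim, act_dim)
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
marginal = torch.distributions.Normal(torch.zeros(act_dim), torch.ones(act_dim))
loss = capacity_limited_actor_loss(actor, critic, torch.randn(32, obs_dim), marginal)
loss.backward()

Raising beta tightens the capacity limit, pushing each agent's action distribution toward the fixed marginal regardless of state; the trade-off between reward and policy complexity is what the sketch is meant to illustrate.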