Recent advances in multi-agent reinforcement learning (MARL) allow agents to coordinate their behaviors in complex environments. However, common MARL algorithms still suffer from scalability and sparse-reward issues. One promising approach to resolving these issues is automatic curriculum learning (ACL). ACL involves a student (curriculum learner) training on tasks of increasing difficulty controlled by a teacher (curriculum generator). Despite its success, ACL's applicability is limited by (1) the lack of a general student framework for dealing with the varying number of agents across tasks and the sparse-reward problem, and (2) the non-stationarity of the teacher's task due to ever-changing student strategies. To address these limitations, we introduce a novel automatic curriculum learning framework, Skilled Population Curriculum (SPC), which adapts curriculum learning to multi-agent coordination. Specifically, we endow the student with population-invariant communication and a hierarchical skill set, allowing it to learn cooperation and behavior skills from distinct tasks with varying numbers of agents. In addition, we model the teacher as a contextual bandit conditioned on student policies, enabling a team of agents to change its size while still retaining previously acquired skills. We also analyze the inherent non-stationarity of this multi-agent automatic curriculum teaching problem and provide a corresponding regret bound. Empirical results show that our method improves performance, scalability, and sample efficiency in several MARL environments.