This work proposes a scheme for learning complex multi-agent behaviors in a sample-efficient manner, applied to 2v2 soccer. The problem is formulated as a Markov game and solved using deep reinforcement learning. We propose a basic multi-agent extension of TD3 that learns each player's policy in a decentralized manner. To ease learning, the task of 2v2 soccer is divided into three stages: 1v0, 1v1, and 2v2. Learning in the multi-agent stages (1v1 and 2v2) uses agents trained in a previous stage as fixed opponents. In addition, we propose experience sharing, a method that reuses the experience generated by a fixed opponent trained in a previous stage to train the agent currently learning, together with a form of frame-skipping, which raises performance significantly. Our results show that high-quality soccer play can be obtained with this approach in just under 40M environment interactions. A summarized video of the resulting gameplay can be found at https://youtu.be/f25l1j1U9RM.
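To make the two tricks described above concrete, the following is a minimal Python sketch of one data-collection loop combining frame-skipping (implemented as action repetition) with experience sharing, where transitions observed from the fixed opponent's egocentric perspective are also pushed into the learner's replay buffer. The environment interface (paired per-player observations, a joint `step`) and all names here are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import deque


class ReplayBuffer:
    """FIFO buffer of (obs, action, reward, next_obs, done) tuples."""

    def __init__(self, capacity=1_000_000):
        self.storage = deque(maxlen=capacity)

    def push(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size):
        return random.sample(self.storage, batch_size)


def collect_episode(env, learner_policy, opponent_policy,
                    learner_buffer, frame_skip=4):
    """Collect one episode of experience for the learning agent.

    Frame-skipping: each chosen action is repeated `frame_skip` times,
    accumulating reward, so policies decide at a coarser timescale.
    Experience sharing: because observations are egocentric, the fixed
    (pretrained) opponent's transitions are valid training data for the
    learner and are stored in the same buffer, increasing the experience
    gathered per environment interaction.

    Assumes a hypothetical two-player interface:
      env.reset() -> (obs_learner, obs_opponent)
      env.step(a_learner, a_opponent)
          -> ((next_l, next_o), (rew_l, rew_o), done)
    """
    obs_l, obs_o = env.reset()
    done = False
    while not done:
        a_l = learner_policy(obs_l)
        a_o = opponent_policy(obs_o)  # fixed agent from a previous stage
        r_l = r_o = 0.0
        for _ in range(frame_skip):  # action repeat = frame-skipping
            (next_l, next_o), (rew_l, rew_o), done = env.step(a_l, a_o)
            r_l += rew_l
            r_o += rew_o
            if done:
                break
        learner_buffer.push((obs_l, a_l, r_l, next_l, done))
        # Experience sharing: relabel and store the opponent's transition.
        learner_buffer.push((obs_o, a_o, r_o, next_o, done))
        obs_l, obs_o = next_l, next_o
```

In this sketch each environment episode yields twice as many stored transitions, which is one plausible reading of how experience sharing contributes to the reported sample efficiency; the learner's TD3 update would then sample minibatches from `learner_buffer` as usual.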