We study the problem of training a Reinforcement Learning (RL) agent that is collaborative with humans without using any human data. Although such agents can be obtained through self-play training, they can suffer significantly from distributional shift when paired with unseen partners, such as humans. To mitigate this distributional shift, we propose Maximum Entropy Population-based training (MEP). In MEP, agents in the population are trained with our derived Population Entropy bonus to promote both pairwise diversity between agents and individual diversity of each agent, and a common best agent is then trained by pairing with agents from this diversified population via prioritized sampling. The prioritization is dynamically adjusted based on training progress. We demonstrate the effectiveness of MEP through comparisons with Self-Play PPO (SP), Population-Based Training (PBT), Trajectory Diversity (TrajeDi), and Fictitious Co-Play (FCP) in the Overcooked game environment, with partners being both human proxy models and real humans. A supplementary video showing experimental results is available at https://youtu.be/Xh-FKD0AAKE.
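To make the core idea concrete, the Population Entropy bonus can be understood as the entropy of the population's mean policy at a state: it is high both when agents disagree with each other (pairwise diversity) and when individual agents are themselves stochastic (individual diversity). The following is a minimal illustrative sketch, not the paper's implementation; the function name and array layout are assumptions for the example.

```python
import numpy as np

def population_entropy_bonus(action_probs: np.ndarray) -> float:
    """Entropy of the mean policy over a population at one state.

    action_probs: array of shape (n_agents, n_actions), where each row
    is one agent's action distribution at the current state.
    """
    # Mean policy across the population (hypothetical formulation for illustration).
    mean_policy = action_probs.mean(axis=0)
    # Shannon entropy of the mean policy; small epsilon avoids log(0).
    return float(-np.sum(mean_policy * np.log(mean_policy + 1e-12)))

# Two deterministic agents that always pick different actions:
# the mean policy is uniform, so the bonus is maximal (ln 2).
probs = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
bonus = population_entropy_bonus(probs)  # ≈ 0.693
```

Adding this bonus to each agent's reward during population training encourages the population as a whole to cover diverse behaviors, which the best-response agent then learns to cooperate with via prioritized sampling.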