In recent years we have seen rapid progress on a number of benchmark problems in AI, with modern methods achieving near- or super-human performance in Go, Poker, and Dota. One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum. In contrast, success in the real world commonly requires humans to collaborate and communicate with others in settings that are, at least partially, cooperative. In the last year, the card game Hanabi has been established as a new benchmark environment for AI that fills this gap. In particular, Hanabi is interesting to humans because it is entirely focused on theory of mind, i.e., the ability to effectively reason over the intentions, beliefs, and point of view of other agents when observing their actions. Learning to be informative when observed by others is an interesting challenge for Reinforcement Learning (RL): fundamentally, RL requires agents to explore in order to discover good policies. However, when exploration is carried out naively, this randomness inherently makes agents' actions less informative to others during training. We present a new deep multi-agent RL method, the Simplified Action Decoder (SAD), which resolves this contradiction by exploiting the centralized training phase. During training, SAD allows agents to observe not only the (exploratory) action chosen by their teammates, but also the greedy action those teammates would have taken. By combining this simple intuition with best practices for multi-agent learning, SAD establishes a new state of the art for learning methods on the self-play part of the Hanabi challenge for 2-5 players. Our ablations quantify the contribution of SAD relative to these best-practice components. All of our code and trained agents are available at https://github.com/facebookresearch/Hanabi_SAD.
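To make the core idea concrete, the following is a minimal, hypothetical Python sketch of SAD-style action selection, not the authors' implementation; names such as `QNetworkStub` and `select_actions` are illustrative assumptions. Each agent computes both an epsilon-greedy exploratory action and the greedy action, executes the former in the environment, and exposes both to its teammates during centralized training, so exploration noise does not corrupt the informational content of actions.

```python
import random

class QNetworkStub:
    """Stand-in for a learned Q-network: returns toy Q-values per action."""
    def __init__(self, num_actions):
        self.num_actions = num_actions

    def q_values(self, observation):
        # Deterministic pseudo-Q-values derived from the observation,
        # just so the example runs end to end.
        rng = random.Random(observation)
        return [rng.random() for _ in range(self.num_actions)]


def select_actions(q_net, observation, epsilon):
    """Return (exploratory_action, greedy_action) for one agent."""
    q = q_net.q_values(observation)
    greedy = max(range(len(q)), key=q.__getitem__)
    if random.random() < epsilon:
        exploratory = random.randrange(len(q))  # exploration noise
    else:
        exploratory = greedy
    return exploratory, greedy


if __name__ == "__main__":
    q_net = QNetworkStub(num_actions=5)
    exploratory, greedy = select_actions(q_net, observation="turn_3_hand_A", epsilon=0.3)
    # The environment executes `exploratory`; under SAD-style centralized
    # training the teammate's next input would include both actions, e.g.:
    teammate_input = {"executed_action": exploratory, "greedy_action": greedy}
    print(teammate_input)
```

At test time only the executed action is available, which is why the paper restricts this augmentation to the centralized training phase.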