We develop a new model that can be applied to any two-player zero-sum game with perfect information in order to target a high score, and thus perfect play. We integrate this model into the Monte Carlo tree search and policy iteration learning pipeline introduced by Google DeepMind with AlphaGo. Training this model on 9x9 Go produces a superhuman Go player, demonstrating that the approach is stable and robust. We show that the model can be used to play effectively with both positional and score handicaps. We develop a family of agents that can target high scores against any opponent and recover from very severe disadvantages against weak opponents. To the best of our knowledge, these are the first effective achievements in this direction.