We develop a new model that can be applied to any two-player zero-sum game of perfect information to target a high score, and thus aim toward perfect play. We integrate this model into the Monte Carlo tree search-policy iteration learning pipeline introduced by Google DeepMind with AlphaGo. Training this model on 9x9 Go produces a superhuman Go player, showing that the approach is stable and robust. We show that the model can be used to play effectively with both positional and score handicap, and to minimize suboptimal moves. We build a family of agents that can target high scores against any opponent and recover from severe disadvantages against weak opponents. To the best of our knowledge, these are the first effective achievements in this direction.
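To illustrate the idea of targeting a score rather than a bare win, the following minimal Python sketch (with hypothetical names; not the paper's actual architecture) evaluates a position by the estimated probability that the final score margin exceeds a chosen target, a quantity that a Monte Carlo tree search could back up in place of the plain win/loss value:

```python
import math

def target_win_probability(score_mean: float, score_scale: float,
                           target: float) -> float:
    """Estimated probability that the final score margin exceeds `target`.

    The margin is modeled here with a logistic distribution (an assumption
    for illustration); `score_mean` and `score_scale` would come from a
    trained value head predicting the score distribution.
    """
    return 1.0 / (1.0 + math.exp(-(score_mean - target) / score_scale))

# A score-targeting agent backs up this probability in the tree search
# instead of the plain win/loss value: raising `target` pushes play toward
# higher final scores, while a negative `target` corresponds to playing
# under a score handicap.
value_for_search = target_win_probability(score_mean=5.5, score_scale=3.0,
                                          target=0.5)
```

Varying the target parameter is what yields a family of agents from a single trained model: the same network can play for a narrow safe win, a large margin, or recovery from a deficit.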