We attack the state-of-the-art Go-playing AI system KataGo by training adversarial policies that play against frozen KataGo victims. Our attack achieves a >99% win rate when KataGo uses no tree search, and a >97% win rate when KataGo uses enough search to be superhuman. We train our adversaries with a modified KataGo implementation, using less than 14% of the compute used to train the original KataGo. Notably, our adversaries do not win by learning to play Go better than KataGo -- in fact, our adversaries are easily beaten by human amateurs. Instead, our adversaries win by tricking KataGo into making serious blunders. Our attack transfers zero-shot to other superhuman Go-playing AIs, and is interpretable to the extent that human experts can successfully implement it, without algorithmic assistance, to consistently beat superhuman AIs. Our results demonstrate that even superhuman AI systems may harbor surprising failure modes. Example games are available at https://goattack.far.ai/.