The success of AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin. Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions. In this paper, we first extend the concept of adversarial examples to the game of Go: we generate perturbed states that are ``semantically'' equivalent to the original state by adding meaningless moves to the game, and an adversarial state is a perturbed state that leads to an action whose inferiority is obvious even to Go beginners. However, searching for adversarial states is challenging due to the large, discrete, and non-differentiable search space. To tackle this challenge, we develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space. This method can also be extended to other board games such as NoGo. Experimentally, we show that the actions taken by both the policy-value neural network (PV-NN) and Monte Carlo tree search (MCTS) can be misled by adding one or two meaningless stones; for example, in 58\% of the AlphaGo Zero self-play games, our method makes the widely used KataGo agent, with 50 MCTS simulations, play a losing action by adding two meaningless stones. We additionally evaluated the adversarial examples found by our algorithm with amateur human Go players, and 90\% of the examples indeed led the Go agent to play an obviously inferior action. Our code is available at \url{https://PaperCode.cc/GoAttack}.
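To make the attack described above concrete, the following is a minimal sketch of the search loop under assumed interfaces: it enumerates candidate ``meaningless'' moves, perturbs the state, and flags the perturbation as adversarial if the victim agent's chosen action becomes clearly losing according to a stronger reference agent. The names \texttt{agent.best\_move}, \texttt{board.legal\_moves}, \texttt{is\_meaningless}, and \texttt{oracle.value} are hypothetical and do not correspond to the paper's actual API; the real method strategically prunes this search space rather than enumerating it exhaustively.

\begin{verbatim}
# Minimal sketch (hypothetical interfaces, not the paper's implementation):
# brute-force search for an adversarial state by adding one "meaningless"
# stone and checking whether the victim agent's move becomes clearly losing.

LOSING_THRESHOLD = 0.1  # win rate below which a move is treated as losing


def find_adversarial_state(board, agent, oracle):
    """Return a perturbed board that misleads `agent`, or None.

    agent  -- the victim (e.g., a PV-NN or low-simulation MCTS agent)
    oracle -- a much stronger reference agent used to judge move quality
    """
    for move in board.legal_moves():
        # Only "meaningless" moves keep the perturbed state semantically
        # equivalent to the original, per the paper's notion of equivalence.
        if not is_meaningless(board, move):
            continue
        perturbed = board.play(move)
        victim_move = agent.best_move(perturbed)
        # Adversarial if the victim's chosen action is obviously losing
        # from the oracle's point of view.
        if oracle.value(perturbed.play(victim_move)) < LOSING_THRESHOLD:
            return perturbed
    return None
\end{verbatim}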