Evaluating the worst-case performance of a reinforcement learning (RL) agent under the strongest/optimal adversarial perturbations on state observations (within some constraints) is crucial for understanding the robustness of RL agents. However, finding the optimal adversary is challenging, both in whether the optimal attack can be found at all and in how efficiently it can be found. Existing works on adversarial RL either use heuristic methods that may not find the strongest adversary, or directly train an RL-based adversary by treating the agent as part of the environment, which can find the optimal adversary but may become intractable in a large state space. This paper introduces a novel attack method that finds optimal attacks through collaboration between a designed function named the "actor" and an RL-based learner named the "director". The director learns to propose the best policy perturbation directions, and the actor crafts state perturbations that realize a given policy perturbation direction. Our proposed algorithm, PA-AD, is theoretically optimal and significantly more efficient than prior RL-based attacks in environments with large state spaces. Empirical results show that PA-AD universally outperforms state-of-the-art attack methods in various Atari and MuJoCo environments. By applying PA-AD to adversarial training, we achieve state-of-the-art empirical robustness in multiple tasks under strong adversaries. The codebase is released at https://github.com/umd-huang-lab/paad_adv_rl.
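To make the director–actor collaboration concrete, the following is a minimal sketch of the actor step only, under stated assumptions: a PyTorch victim policy with discrete actions, an l-infinity perturbation budget, and a PGD-style update. The names `actor_perturb`, `victim_policy`, `EPS`, and `N_STEPS` are illustrative and not taken from the released PA-AD implementation; in the full algorithm the direction would be produced by the learned director rather than fixed by hand.

```python
# Illustrative sketch, NOT the official PA-AD code. Assumes a victim policy
# network mapping states to action logits and an l_inf budget on the state.
import torch
import torch.nn.functional as F

EPS = 0.01      # l_inf perturbation budget (assumed value)
N_STEPS = 10    # PGD-style iterations used by the actor (assumed value)


def actor_perturb(victim_policy, state, direction, eps=EPS, n_steps=N_STEPS):
    """Craft a state perturbation that pushes the victim's action
    distribution toward the policy-perturbation `direction` proposed
    by the director (here, a weight vector over the victim's actions)."""
    delta = torch.zeros_like(state, requires_grad=True)
    for _ in range(n_steps):
        logits = victim_policy(state + delta)
        # Encourage alignment between the perturbed policy and the direction.
        loss = -(F.log_softmax(logits, dim=-1) * direction).sum()
        loss.backward()
        with torch.no_grad():
            delta -= (eps / n_steps) * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (state + delta).detach()


# Toy usage: a random linear "victim" over 4-dim states and 3 actions.
# In PA-AD the direction comes from the learned RL director; here a fixed
# one-hot target direction stands in for it purely for illustration.
torch.manual_seed(0)
victim = torch.nn.Linear(4, 3)
s = torch.randn(1, 4)
d = torch.tensor([[0.0, 1.0, 0.0]])
s_adv = actor_perturb(victim, s, d)
print("clean action probs:    ", F.softmax(victim(s), -1).detach())
print("perturbed action probs:", F.softmax(victim(s_adv), -1).detach())
```

The split keeps the director's RL problem in the (typically small) policy/action space while the actor handles the (typically large) state space with gradient-based optimization, which is what makes the approach tractable in large-state environments.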