Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs, which raises concerns about deploying such agents in the real world. To address this issue, we propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against $l_p$-norm bounded adversarial attacks. Our framework is compatible with popular deep reinforcement learning algorithms and we demonstrate its performance with deep Q-learning, A3C and PPO. We experiment on three deep RL benchmarks (Atari, MuJoCo and ProcGen) to show the effectiveness of our robust training algorithm. Our RADIAL-RL agents consistently outperform prior methods when tested against attacks of varying strength and are more computationally efficient to train. In addition, we propose a new evaluation method called Greedy Worst-Case Reward (GWC) to measure attack-agnostic robustness of deep RL agents. We show that GWC can be evaluated efficiently and is a good estimate of the reward under the worst possible sequence of adversarial attacks. All code used for our experiments is available at https://github.com/tuomaso/radial_rl_v2.
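To make the Greedy Worst-Case Reward idea concrete, the following is a minimal, hypothetical Python sketch of a greedy worst-case rollout: at each step, the adversary is assumed able to force any action whose certified upper value bound overlaps the best action's lower bound, and it greedily picks the one the agent ranks lowest. The names `q_values_fn`, `lower_bound_fn`, and `upper_bound_fn` are placeholder assumptions (nominal Q-values and certified output bounds under the $l_p$ perturbation ball), and a Gym-style `env.step` API is assumed; this is not the paper's implementation, which is available in the linked repository.

```python
import numpy as np

def greedy_worst_case_reward(env, q_values_fn, lower_bound_fn, upper_bound_fn,
                             max_steps=10_000):
    """Sketch of a greedy worst-case rollout for estimating robustness.

    At each step, consider the set of actions an adversary could force the
    perturbed agent to take (actions whose upper value bound reaches the best
    action's lower bound) and greedily follow the one with the lowest nominal
    value, approximating the worst possible sequence of attacks.
    """
    obs = env.reset()
    total_reward, done, steps = 0.0, False, 0
    while not done and steps < max_steps:
        q = q_values_fn(obs)      # nominal Q-values, shape (n_actions,)
        lb = lower_bound_fn(obs)  # certified lower bounds under the l_p ball
        ub = upper_bound_fn(obs)  # certified upper bounds under the l_p ball
        # Actions the adversary could force: upper bound reaches the
        # best action's certified lower bound.
        reachable = np.where(ub >= lb.max())[0]
        worst_action = reachable[np.argmin(q[reachable])]
        obs, reward, done, _ = env.step(worst_action)
        total_reward += reward
        steps += 1
    return total_reward
```

Because each step requires only one bound computation and a greedy choice, this estimate needs a single rollout rather than a search over all attack sequences, which is what makes the evaluation efficient.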