Neural network policies trained using Deep Reinforcement Learning (DRL) are well known to be susceptible to adversarial attacks. In this paper, we consider attacks that manifest as perturbations in the observation space managed by the external environment. Such attacks have been shown to degrade policy performance significantly. We focus on well-trained deterministic and stochastic neural network policies for continuous control benchmarks subjected to four well-studied observation-space adversarial attacks. To defend against these attacks, we propose a novel defense strategy based on a detect-and-denoise scheme. Unlike previous adversarial training approaches that sample data in adversarial scenarios, our solution does not require sampling data in an environment under attack, thereby greatly reducing risk during training. Detailed experimental results show that our technique is comparable to state-of-the-art adversarial training approaches.