Deep reinforcement learning models are vulnerable to adversarial attacks that decrease a victim's cumulative expected reward by manipulating the victim's observations. Despite the efficiency of previous optimization-based methods for generating adversarial noise in supervised learning, such methods might not achieve the lowest cumulative reward because they generally do not explore the environmental dynamics. In this paper, we provide a framework for better understanding the existing methods by reformulating the problem of adversarial attacks on reinforcement learning in the function space. Our reformulation yields an optimal adversary in the function space of targeted attacks, which we obtain via a generic two-stage framework. In the first stage, we train a deceptive policy by hacking the environment, discovering a set of trajectories that lead to the lowest reward, i.e., the worst-case performance. In the second stage, the adversary misleads the victim into imitating the deceptive policy by perturbing its observations. Compared with existing approaches, we theoretically show that our adversary is stronger given an appropriate noise level. Extensive experiments demonstrate our method's superiority in terms of efficiency and effectiveness, achieving state-of-the-art performance in both Atari and MuJoCo environments.
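To make the two-stage framework concrete, the following is a minimal sketch, assuming a PyTorch victim policy that maps an observation to action logits. All names (`negate_reward`, `perturb_observation`, `victim_net`, `epsilon`) are illustrative assumptions, not the paper's actual code: stage 1 is summarized as training on negated rewards, and stage 2 as a PGD-style perturbation that steers the victim toward the deceptive policy's action.

```python
import torch
import torch.nn.functional as F


def negate_reward(reward: float) -> float:
    """Stage 1 idea (sketch): train the deceptive policy with any standard RL
    algorithm on the negated reward, so it discovers trajectories that lead to
    the lowest return, i.e., the worst-case performance."""
    return -reward


def perturb_observation(victim_net: torch.nn.Module,
                        obs: torch.Tensor,
                        deceptive_action: int,
                        epsilon: float = 0.05,
                        steps: int = 10,
                        step_size: float = 0.01) -> torch.Tensor:
    """Stage 2 idea (sketch): find an L-infinity-bounded perturbation of the
    observation that pushes the victim's policy toward the deceptive policy's
    action, so the victim imitates the deceptive policy."""
    delta = torch.zeros_like(obs, requires_grad=True)
    target = torch.tensor([deceptive_action])
    for _ in range(steps):
        logits = victim_net(obs + delta)
        # Minimizing the cross-entropy to the deceptive action makes the
        # victim's action distribution imitate the deceptive policy.
        loss = F.cross_entropy(logits.unsqueeze(0), target)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)  # keep the noise level bounded
        delta.grad.zero_()
    return (obs + delta).detach()
```

At deployment time, the adversary would query the deceptive policy for its action at the current state and call `perturb_observation` on the victim's observation before the victim acts; `epsilon` plays the role of the "appropriate noise level" referenced above.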