Due to the broad range of applications of reinforcement learning (RL), understanding the effects of adversarial attacks against RL models is essential for their safe deployment. Prior theoretical work on adversarial attacks against RL has mainly focused on either observation poisoning attacks or environment poisoning attacks. In this paper, we introduce a new class of attacks named action poisoning attacks, in which an adversary can change the action signal selected by the agent before it reaches the environment. Compared with existing attack models, the attacker's ability in the proposed action poisoning attack model is more restricted, which brings some design challenges. We study action poisoning attacks in both white-box and black-box settings. We introduce an adaptive attack scheme called LCB-H, which works against most RL agents in the black-box setting. We prove that the LCB-H attack can force any efficient RL agent, whose dynamic regret scales sublinearly with the total number of steps taken, to choose actions according to a policy selected by the attacker very frequently, while incurring only sublinear cost. In addition, we apply the LCB-H attack against a popular model-free RL algorithm, UCB-H. We show that, even in the black-box setting, the proposed LCB-H attack scheme can force the UCB-H agent to choose actions according to the attacker-selected policy very frequently, while incurring only logarithmic cost.
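To make the threat model concrete, below is a minimal sketch of an action poisoning attacker interposed between an agent and its environment, assuming an episodic MDP with discrete actions. All names here (ActionPoisoningAttacker, target_policy, poison) are hypothetical illustrations rather than the paper's implementation, and the replacement rule is a placeholder: an actual scheme such as LCB-H would choose the substituted action adaptively from observed data.

```python
import random

class ActionPoisoningAttacker:
    """Sketch of the action poisoning threat model (hypothetical, not the paper's code).

    The attacker observes the state and the agent's chosen action, and may
    replace that action before the environment receives it. The attack cost
    is the number of steps on which the action is actually changed; the
    abstract's guarantees say this cost can be kept sublinear (or, against
    UCB-H, logarithmic) in the total number of steps.
    """

    def __init__(self, target_policy, num_actions):
        self.target_policy = target_policy  # policy the attacker wants the agent to follow
        self.num_actions = num_actions
        self.cost = 0                       # number of overridden actions so far

    def poison(self, state, agent_action):
        """Return the action actually sent to the environment."""
        target_action = self.target_policy(state)
        if agent_action == target_action:
            # Agent already follows the target policy: no attack, no cost.
            return agent_action
        self.cost += 1
        # Placeholder replacement rule. A black-box scheme such as LCB-H
        # would instead pick a substitute estimated to perform poorly, so
        # that deviating from the target policy looks unrewarding to the
        # agent; here we substitute an arbitrary non-target action.
        choices = [a for a in range(self.num_actions) if a != target_action]
        return random.choice(choices)


# Example usage with a toy constant target policy (hypothetical):
attacker = ActionPoisoningAttacker(target_policy=lambda s: 0, num_actions=3)
sent = attacker.poison(state=5, agent_action=2)  # overridden; attacker.cost == 1
```

The design point the sketch illustrates is why this attack model is more restricted than observation or environment poisoning: the attacker can only swap the action on steps where it intervenes, and each intervention is counted toward the attack cost, so an efficient attack must intervene rarely.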