While real-world applications of reinforcement learning (RL) are becoming popular, the safety and robustness of RL systems require more attention. A recent work reveals that, in a multi-agent RL environment, backdoor trigger actions can be injected into a victim agent (a.k.a. a trojan agent), causing catastrophic failure as soon as the agent observes the trigger action. We propose the problem of RL backdoor detection, aiming to address this safety vulnerability. An interesting observation drawn from our extensive empirical studies is a trigger smoothness property: normal actions similar to the backdoor trigger actions can also degrade the trojan agent's performance. Inspired by this observation, we propose TrojanSeeker, a reinforcement learning solution that finds approximate trigger actions for trojan agents, and we further propose an efficient approach to mitigate trojan agents based on machine unlearning. Experiments show that our approach correctly distinguishes and mitigates all the trojan agents across various types of agents and environments.