Backdoor attacks on reinforcement learning implant a backdoor in a victim agent's policy. Once the victim observes the trigger signal, it switches to an abnormal mode and fails at its task. Most existing attacks assume the adversary can arbitrarily modify the victim's observations, which may not be practical. One prior work proposes letting an adversary agent use its own actions to affect its opponent in two-agent competitive games, so that the opponent quickly fails after observing certain trigger actions. In multi-agent collaborative systems, however, agents may not always be able to observe one another: when and how much the adversary agent can affect the others is uncertain, and we want the adversary agent to trigger the others as few times as possible. To solve this problem, we first design a novel training framework that produces auxiliary rewards measuring the extent to which the other agents' observations are affected. We then use these auxiliary rewards to train a trigger policy that enables the adversary agent to efficiently affect the others' observations. Given the affected observations, we further train the other agents to behave abnormally. Extensive experiments demonstrate that the proposed method enables the adversary agent to lure the others into the abnormal mode with only a few actions.
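As a concrete illustration of the auxiliary-reward idea, below is a minimal Python sketch. Everything here is a hypothetical instantiation for illustration, not the paper's actual design: the function names (`auxiliary_reward`, `trigger_policy_reward`), the choice of an L2 deviation against a trigger-free baseline rollout as the influence measure, and the `action_cost` penalty that discourages frequent triggering are all assumptions.

```python
import numpy as np


def auxiliary_reward(obs_others, obs_others_baseline):
    """Hypothetical auxiliary reward: mean L2 deviation of the other
    agents' observations from a baseline rollout in which the adversary
    takes no trigger actions. A larger deviation indicates the adversary
    affected the others' observations more."""
    deltas = [np.linalg.norm(np.asarray(o) - np.asarray(b))
              for o, b in zip(obs_others, obs_others_baseline)]
    return float(np.mean(deltas))


def trigger_policy_reward(obs_others, obs_others_baseline,
                          took_trigger_action, action_cost=0.1):
    """Combined training signal for the trigger policy: reward influence
    on the others' observations while charging a fixed cost per trigger
    action, so the adversary learns to trigger as few times as possible.
    The action_cost value here is an arbitrary illustrative choice."""
    r_aux = auxiliary_reward(obs_others, obs_others_baseline)
    return r_aux - action_cost * float(took_trigger_action)
```

Under these assumptions, the trigger policy would be optimized with any standard RL algorithm using `trigger_policy_reward` as its per-step reward, trading off influence on teammates' observations against the number of trigger actions taken.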