There has been an ongoing cycle in which stronger defenses against adversarial attacks are subsequently broken by more advanced, defense-aware attacks. We present a new approach toward ending this cycle: we "deflect" adversarial attacks by causing the attacker to produce an input that semantically resembles the attack's target class. To this end, we first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance on both standard and defense-aware attacks. We then perform a human study in which participants are asked to label images produced by the attack, and show that undetected attacks against our defense often perceptually resemble the adversarial target class. These attack images can no longer be called "adversarial" because our network classifies them the same way humans do.