Deep neural networks have been shown to be vulnerable to adversarial attacks: subtle perturbations can completely change the prediction results. This vulnerability has led to a surge of research in this direction, including adversarial attacks on object detection networks. However, previous studies have been dedicated to attacking anchor-based object detectors. In this paper, we present the first adversarial attack on anchor-free object detectors. It conducts category-wise attacks, rather than the instance-wise attacks of prior work, and leverages high-level semantic information to efficiently generate transferable adversarial examples, which can also be transferred to attack other object detectors, even anchor-based detectors such as Faster R-CNN. Experimental results on two benchmark datasets demonstrate that our proposed method achieves state-of-the-art performance and transferability.