Deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, object detectors can be attacked by applying a crafted adversarial patch to the image. However, because the patch shrinks during image preprocessing, most existing approaches that use adversarial patches to attack object detectors suffer a reduced attack success rate on small and medium targets. This paper proposes a frequency-domain attention module, FRAN, to guide patch generation. To our knowledge, this is the first study to introduce frequency-domain attention to improve the attack capability of adversarial patches. When attacking YOLOv3 to fool person detection, our method raises the attack success rates on small and medium targets by 4.18% and 3.89%, respectively, over the state-of-the-art attack method, without reducing the attack success rate on large targets.