Most black-box adversarial attack schemes against object detectors suffer from two shortcomings: they require access to the target model, and they generate inefficient adversarial examples that fail to make large numbers of objects disappear. To overcome these shortcomings, we propose a black-box adversarial attack scheme based on semantic segmentation and model inversion (SSMI). We first locate the target object using semantic segmentation. Next, we design a neighborhood background pixel replacement that substitutes background pixels for target-region pixels, ensuring that the modifications are not easily detected by human vision. Finally, we reconstruct a machine-recognizable example and use a mask matrix to select pixels from the reconstructed example with which to modify the benign image, generating an adversarial example. Detailed experimental results show that SSMI generates efficient adversarial examples that evade human perception and make objects of interest disappear. More importantly, SSMI outperforms existing attacks of the same kind: the maximum increase in new and disappearing labels is 16%, and the maximum decrease in object-detection mAP is 36%.
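The final step above, selecting pixels from the reconstructed example via a mask matrix and writing them into the benign image, can be sketched as follows. This is a minimal NumPy illustration of the general mask-selection idea, not the authors' implementation; all names are illustrative.

```python
import numpy as np

def apply_mask_selection(benign, reconstructed, mask):
    """Copy pixels from the reconstructed example into the benign image
    wherever the binary mask marks the target region; pixels outside the
    mask are left untouched."""
    mask = mask.astype(bool)
    adv = benign.copy()
    adv[mask] = reconstructed[mask]
    return adv

# Toy 4x4 grayscale example: a black benign image, a white reconstructed
# example, and a mask covering the central 2x2 target region.
benign = np.zeros((4, 4), dtype=np.uint8)
reconstructed = np.full((4, 4), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

adv = apply_mask_selection(benign, reconstructed, mask)
```

Only the masked (target-region) pixels change, which keeps the perturbation localized to the object of interest.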