We propose an approach for adversarial attacks on dense prediction models (such as object detectors and segmentation models). It is well known that attacks generated with a single surrogate model often do not transfer to arbitrary (blackbox) victim models. Furthermore, targeted attacks are often more challenging than untargeted attacks. In this paper, we show that a carefully designed ensemble of surrogate models can create effective attacks against a variety of victim models. In particular, we show that normalization of the weights assigned to the individual models plays a critical role in the success of the attacks. We then demonstrate that adjusting the ensemble weights according to the victim model can further improve the performance of the attacks. We conducted a number of experiments on object detection and segmentation to demonstrate the effectiveness of our proposed method. Our proposed ensemble-based method outperforms existing blackbox attack methods for object detection and segmentation. Finally, we show that our proposed method can also generate a single perturbation that fools multiple blackbox detection and segmentation models simultaneously. Code is available at https://github.com/CSIPlab/EBAD.
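To make the weight-normalization and victim-guided weight-adjustment ideas concrete, here is a minimal PyTorch sketch. It is not the exact EBAD implementation (see the repository linked above for that): `surrogate_losses`, `victim_score`, the coordinate-wise weight search, and all hyperparameters below are illustrative assumptions.

```python
# Minimal sketch: optimize one perturbation against a weighted ensemble of
# white-box surrogates, keep the weights normalized (summing to 1), and
# adjust them using blackbox feedback from the victim. NOT the exact EBAD
# algorithm; all names and constants here are illustrative assumptions.
import torch


def ensemble_attack(x, surrogate_losses, victim_score,
                    steps=5, inner=20, eps=8 / 255, alpha=2 / 255, bump=0.5):
    """x: clean image tensor in [0, 1].
    surrogate_losses: callables mapping a perturbed image to a scalar attack
        loss (e.g., a targeted detection/segmentation loss) to be minimized.
    victim_score: blackbox callable scoring how well an image fools the
        victim (higher is better); this is the only access to the victim.
    """
    n = len(surrogate_losses)
    w = torch.full((n,), 1.0 / n)   # normalized ensemble weights (sum to 1)
    delta = torch.zeros_like(x)

    def optimize(delta0, weights):
        # Signed-gradient (PGD-style) descent on the weighted surrogate loss.
        d = delta0.clone().requires_grad_(True)
        for _ in range(inner):
            loss = sum(weights[i] * surrogate_losses[i]((x + d).clamp(0, 1))
                       for i in range(n))
            grad, = torch.autograd.grad(loss, d)
            d = (d.detach() - alpha * grad.sign()).clamp(-eps, eps)
            d.requires_grad_(True)
        return d.detach()

    for _ in range(steps):
        # Candidate weightings: current weights plus one up-weighted variant
        # per surrogate, each re-normalized so the weights still sum to 1.
        cands = [w] + [(w + bump * torch.eye(n)[i]) / (1.0 + bump)
                       for i in range(n)]
        # Keep whichever candidate the blackbox victim rates highest
        # (a simple coordinate search; the paper's update rule may differ).
        scored = []
        for cand in cands:
            d = optimize(delta, cand)
            scored.append((victim_score((x + d).clamp(0, 1)), cand, d))
        _, w, delta = max(scored, key=lambda t: t[0])
    return delta
```

One design point this sketch illustrates: the perturbation is optimized entirely on the white-box surrogates, and the blackbox victim is queried only between inner optimization rounds to re-weight the ensemble, which keeps the victim query budget small.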