This paper presents the first adversarial-example-based method for attacking human instance segmentation networks (person segmentation networks for short), which are harder to fool than classification networks. We propose a novel Fashion-Guided Adversarial Attack (FashionAdv) framework that automatically identifies attackable regions in the target image to minimize the effect on image quality. It generates adversarial textures learned from fashion style images and overlays them on the clothing regions of the original image, making every person in the image invisible to person segmentation networks. The synthesized adversarial textures are inconspicuous and appear natural to the human eye. The effectiveness of the proposed method is further enhanced by robustness training and by jointly attacking multiple components of the target network. Extensive experiments demonstrate that FashionAdv is robust to image manipulations and to storage in cyberspace while remaining natural to the human eye. The code and data are publicly released on our project page: https://github.com/nii-yamagishilab/fashion_adv
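The core overlay step described above can be illustrated with a minimal sketch. This is only an assumption-laden illustration of masked texture blending: the function name `overlay_adversarial_texture`, the tensor shapes, and the blending rule are hypothetical and do not reproduce the released FashionAdv implementation.

```python
import torch

def overlay_adversarial_texture(image, texture, clothing_mask):
    """Blend an adversarial texture into the clothing regions of an image.

    image:         float tensor (3, H, W) in [0, 1], the original photo
    texture:       float tensor (3, H, W), the learned adversarial texture
    clothing_mask: float tensor (1, H, W), 1 inside clothing regions, 0 elsewhere
    """
    # Keep the background untouched; replace only the masked clothing pixels.
    perturbed = image * (1.0 - clothing_mask) + texture * clothing_mask
    return perturbed.clamp(0.0, 1.0)

# Hypothetical usage: in practice the image, the adversarial texture, and the
# clothing mask would come from the dataset, the texture generator, and a
# clothing segmenter, respectively.
image = torch.rand(3, 256, 256)
texture = torch.rand(3, 256, 256)
clothing_mask = (torch.rand(1, 256, 256) > 0.5).float()
adv_image = overlay_adversarial_texture(image, texture, clothing_mask)
```

Because only the clothing pixels are modified, the rest of the image is preserved exactly, which is consistent with the stated goal of minimizing the effect on image quality.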