The security of the person re-identification (ReID) model plays a decisive role in the application of ReID. However, deep neural networks have been shown to be vulnerable: adding imperceptible adversarial perturbations to clean images can fool deep neural networks that otherwise perform well on clean images. We propose a ReID multi-modal data augmentation method with an adversarial defense effect: 1) Grayscale Patch Replacement, which consists of Local Grayscale Patch Replacement (LGPR) and Global Grayscale Patch Replacement (GGPR); this method not only improves the accuracy of the model but also helps the model defend against adversarial examples; 2) Multi-Modal Defense, which integrates three homogeneous modal images (visible, grayscale, and sketch) and further strengthens the defense ability of the model. These methods fuse different modalities of homogeneous images to enrich the variety of the input samples; this variety reduces the ReID model's over-fitting to color variations and makes the adversarial space of the dataset that an attack method can find difficult to align, so the accuracy of the model is improved and the attack effect is greatly reduced. The more homogeneous modalities are fused, the stronger the defense. The proposed method performs well on multiple datasets and successfully defends against the MS-SSIM attack on ReID proposed at CVPR 2020 [10], increasing accuracy 467-fold (from 0.2% to 93.3%). The code is available at https://github.com/finger-monkey/ReID_Adversarial_Defense.
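To make the augmentation concrete, below is a minimal sketch of the two grayscale transforms as they might be implemented for training. The function names, probabilities, and patch-size ranges here are illustrative assumptions, not values from the paper: GGPR converts the whole image to grayscale with some probability, while LGPR grayscales only a random rectangular patch, leaving the rest of the image in color.

```python
import random
from PIL import Image

def global_grayscale_patch_replacement(img: Image.Image, p: float = 0.05) -> Image.Image:
    """GGPR sketch: with probability p, replace the whole image with its grayscale version."""
    if random.random() < p:
        return img.convert("L").convert("RGB")
    return img

def local_grayscale_patch_replacement(
    img: Image.Image,
    p: float = 0.5,
    area_ratio: tuple = (0.02, 0.4),   # patch area as a fraction of the image (assumption)
    aspect: tuple = (0.3, 3.33),       # patch aspect-ratio range (assumption)
) -> Image.Image:
    """LGPR sketch: with probability p, grayscale a random rectangular patch in place."""
    if random.random() >= p:
        return img
    w, h = img.size
    for _ in range(100):  # retry until a sampled patch fits inside the image
        target_area = random.uniform(*area_ratio) * w * h
        ar = random.uniform(*aspect)
        pw = int(round((target_area * ar) ** 0.5))
        ph = int(round((target_area / ar) ** 0.5))
        if pw < w and ph < h:
            x = random.randint(0, w - pw)
            y = random.randint(0, h - ph)
            patch = img.crop((x, y, x + pw, y + ph)).convert("L").convert("RGB")
            img.paste(patch, (x, y))
            return img
    return img
```

Both transforms would slot into a standard ReID training pipeline before tensor conversion and normalization, alongside the usual flips and crops.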
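For Multi-Modal Defense, the abstract states only that the visible, grayscale, and sketch views of an image are integrated. The sketch below assumes feature-level fusion by averaging embeddings across the three views, and uses an edge filter as a crude stand-in for a real sketch transform; `to_sketch`, the input size, and the averaging step are all hypothetical choices, not the paper's exact scheme.

```python
import torch
from PIL import Image, ImageFilter, ImageOps
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Resize((256, 128)),  # common ReID input size (assumption)
    transforms.ToTensor(),
])

def to_sketch(img: Image.Image) -> Image.Image:
    """Crude sketch proxy: inverted edge map standing in for a real sketch transform."""
    edges = img.convert("L").filter(ImageFilter.FIND_EDGES)
    return ImageOps.invert(edges).convert("RGB")

@torch.no_grad()
def multi_modal_features(model: torch.nn.Module, img: Image.Image) -> torch.Tensor:
    """Extract embeddings for the visible, grayscale, and sketch views, then fuse them."""
    views = [
        img,                               # visible modality
        img.convert("L").convert("RGB"),   # grayscale modality
        to_sketch(img),                    # sketch modality
    ]
    batch = torch.stack([to_tensor(v) for v in views])
    feats = model(batch)                   # assumes model returns one embedding per view
    return feats.mean(dim=0)               # simple fusion by averaging (assumption)
```

Because the fused representation no longer depends on any single color rendering of the person, a perturbation crafted against the visible image alone has less room to move the final embedding, which matches the abstract's intuition that fusing more homogeneous modalities strengthens the defense.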