Deep neural networks have developed rapidly and achieved outstanding performance in several tasks, such as image classification and natural language processing. However, recent studies have shown that neural networks can be fooled by both digital and physical adversarial examples. Face-recognition systems are used in many security-critical applications and are therefore exposed to threats from physical adversarial examples. Herein, we propose a physical adversarial attack that uses full-face makeup. Because makeup on a human face is unremarkable, it can increase the imperceptibility of the attack. Our attack framework combines a cycle-consistent generative adversarial network (CycleGAN) with a victim classifier: the CycleGAN generates the adversarial makeup, and the victim classifier uses a VGG-16 architecture. Our experimental results show that the attack effectively tolerates manual errors in makeup application, such as errors in color and position. We also demonstrate that the way the models are trained influences physical attacks; adversarial perturbations crafted from a pre-trained model are affected by its training data.
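To make the framework concrete, the following is a minimal PyTorch sketch of how a CycleGAN-style makeup generator could be paired with a frozen VGG-16 victim classifier. Only the high-level pairing comes from the abstract; the generator architecture, loss weights, identity-label count, and training step below are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch: a CycleGAN-style makeup generator trained against a frozen
# victim classifier (VGG-16).  Module names, loss weights, and training
# details are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class MakeupGenerator(nn.Module):
    """Stand-in for the CycleGAN generator G: no-makeup domain -> makeup domain."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # output in [0, 1]
        )

    def forward(self, x):
        return self.net(x)


# Victim face classifier: VGG-16 backbone, frozen during attack training.
num_identities = 10  # assumption: number of face identities in the label set
victim = vgg16(weights=None)
victim.classifier[6] = nn.Linear(4096, num_identities)
victim.eval()
for p in victim.parameters():
    p.requires_grad_(False)

G = MakeupGenerator()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)


def generator_step(face, true_label, lambda_adv=1.0, lambda_cyc=10.0):
    """One simplified generator update.

    The extra adversarial term pushes the frozen victim classifier away from
    the true identity (an untargeted attack); makeup realism would come from
    the usual CycleGAN losses, of which only a cycle-like term is sketched.
    """
    adv_face = G(face)                      # face with generated makeup
    logits = victim(adv_face)
    # Untargeted adversarial loss: maximize the victim's error on the true identity.
    loss_adv = -F.cross_entropy(logits, true_label)
    # Placeholder consistency term (a real CycleGAN uses a second generator
    # and discriminators to enforce cycle consistency and realism).
    loss_cyc = F.l1_loss(adv_face, face)
    loss = lambda_adv * loss_adv + lambda_cyc * loss_cyc
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()


# Example update on a dummy batch of 224x224 face crops.
faces = torch.rand(2, 3, 224, 224)
labels = torch.tensor([0, 1])
print(generator_step(faces, labels))
```

In a full CycleGAN setup, the realism of the generated makeup would be enforced by the adversarial and cycle-consistency losses of two generator-discriminator pairs; only the additional term that fools the victim classifier is specific to the attack.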