Deep neural networks, particularly face recognition models, have been shown to be vulnerable to both digital and physical adversarial examples. However, existing adversarial examples against face recognition systems either lack transferability to black-box models or cannot be implemented in practice. In this paper, we propose a unified adversarial face generation method, Adv-Makeup, which realizes imperceptible and transferable attacks under the black-box setting. Adv-Makeup develops a task-driven makeup generation method with a blending module to synthesize imperceptible eye shadow over the orbital region of faces. To achieve transferability, Adv-Makeup implements a fine-grained meta-learning adversarial attack strategy to learn more general attack features from various models. Compared with existing techniques, extensive visualization results demonstrate that Adv-Makeup is capable of generating much more imperceptible attacks under both digital and physical scenarios. Meanwhile, extensive quantitative experiments show that Adv-Makeup can significantly improve the attack success rate under the black-box setting, even when attacking commercial systems.
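To make the transferability mechanism concrete, below is a minimal sketch (not the authors' code) of the fine-grained meta-learning attack idea described above: in each iteration, one surrogate face recognition model is held out as a meta-test model while the remaining models serve as meta-train models, so the makeup generator is steered toward attack features that generalize across models rather than overfitting a single white-box model. The generator interface, the cosine-similarity impersonation loss, and all function and variable names are assumptions for illustration.

```python
# A minimal sketch of a fine-grained meta-learning adversarial attack step
# over an ensemble of surrogate face recognition models (PyTorch).
# Assumptions: `generator(face)` returns the face with synthesized eye shadow,
# each model maps a face tensor to an identity embedding, and
# `target_embeddings[k]` is the target identity's embedding under model k.
import torch
import torch.nn.functional as F

def cosine_attack_loss(model, adv_face, target_embedding):
    """Impersonation loss: pull the adversarial face toward the target identity."""
    emb = F.normalize(model(adv_face), dim=-1)
    return 1.0 - F.cosine_similarity(emb, target_embedding, dim=-1).mean()

def meta_attack_step(generator, models, face, target_embeddings, outer_opt, inner_lr=0.01):
    """One meta-learning step: each surrogate model takes a turn as meta-test."""
    total_loss = 0.0
    for i, test_model in enumerate(models):
        train_ids = [j for j in range(len(models)) if j != i]

        # Meta-train: attack loss on the held-in models with the current generator.
        adv_face = generator(face)
        train_loss = sum(
            cosine_attack_loss(models[j], adv_face, target_embeddings[j])
            for j in train_ids
        ) / len(train_ids)

        # Simulated inner update: evaluate the held-out model on a
        # one-step-refined adversarial face (first-order approximation).
        grad = torch.autograd.grad(train_loss, adv_face, create_graph=True)[0]
        adv_face_updated = adv_face - inner_lr * grad
        test_loss = cosine_attack_loss(test_model, adv_face_updated, target_embeddings[i])

        total_loss = total_loss + train_loss + test_loss

    # Outer update of the makeup generator on the combined meta objective.
    outer_opt.zero_grad()
    total_loss.backward()
    outer_opt.step()
    return float(total_loss)
```

In this sketch the held-out model only contributes through the meta-test loss, which is the part that rewards perturbations whose effect survives a model it was not directly optimized on; the full method additionally constrains the generated eye shadow with the blending module so the attack stays visually plausible.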