Face recognition (FR) systems have demonstrated outstanding verification performance, suggesting their suitability for real-world applications ranging from photo tagging in social media to automated border control (ABC). For an advanced FR system built on a deep learning architecture, however, improving recognition accuracy alone is not sufficient: the system must also withstand attacks designed to undermine its performance. Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to adversarial input images, either imperceptible or perceptible but natural-looking, that drive the model to incorrect output predictions. In this article, we present a comprehensive survey of adversarial attacks against FR systems and elaborate on the competence of recent countermeasures against them. Further, we propose a taxonomy of existing attack and defense methods based on different criteria: we compare attack methods by their orientation and attributes, and defense approaches by category. Finally, we discuss open challenges and potential research directions.
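To make the vulnerability concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), one canonical way such imperceptible adversarial perturbations are crafted. The tiny linear "verifier" (weights `w`, bias `b`) is an illustrative assumption, not any particular FR architecture discussed in the survey; real attacks apply the same gradient-sign step to a deep network's loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear scorer: p = sigmoid(w.x + b), label 1 = "same identity".
# A stand-in for a deep FR model, used only to keep the sketch self-contained.
w = rng.normal(size=64)
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input.

    For binary cross-entropy with this linear scorer, dL/dx = (p - y) * w,
    so no autodiff framework is needed here.
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=64)   # stand-in for an input face image (flattened)
y = 1.0                   # true label: genuine match
x_adv = fgsm(x, y, eps=0.1)

# The attack increases the loss, pushing the score away from the true label,
# while each input component changes by at most eps.
print(predict(x), predict(x_adv))
print(np.max(np.abs(x_adv - x)))
```

The key property, shared by attacks on deep FR models, is that the perturbation is bounded in L-infinity norm by `eps`, so a small `eps` yields an image visually indistinguishable from the original while still flipping the model's decision.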