Face recognition has been greatly facilitated by the development of deep neural networks (DNNs) and has been widely applied to many safety-critical applications. However, recent studies have shown that DNNs are highly vulnerable to adversarial examples, raising serious concerns about the security of real-world face recognition. In this work, we study sticker-based physical attacks on face recognition to better understand its adversarial robustness. To this end, we first analyze in depth the complicated physical-world conditions confronted when attacking face recognition, including variations in stickers, faces, and environmental conditions. We then propose a novel robust physical attack framework, dubbed PadvFace, to specifically model these challenging variations. Furthermore, considering the differences in attack complexity, we propose an efficient Curriculum Adversarial Attack (CAA) algorithm that gradually adapts adversarial stickers to environmental variations, from easy to complex. Finally, we construct a standardized testing protocol to facilitate fair evaluation of physical attacks on face recognition, and extensive experiments on both dodging and impersonation attacks demonstrate the superior performance of the proposed method.
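To make the curriculum idea concrete, below is a minimal PyTorch sketch of an easy-to-complex adversarial sticker optimization in the spirit of CAA. Everything in it is an assumption for illustration, not the authors' implementation: the `face_model`, the `conditions` callables, the patch size, and the cosine-similarity impersonation loss are all hypothetical placeholders.

```python
# Hypothetical sketch of a curriculum adversarial attack (CAA-style):
# optimize a sticker under a pool of environmental transformations,
# activating them from easy to complex, stage by stage.
import torch

def curriculum_attack(face_model, face, target_emb, conditions,
                      steps_per_stage=100, lr=0.01):
    """`conditions` is a list of callables sorted from easy to hard;
    each takes (face, sticker) and returns the face image wearing the
    transformed sticker (placement, lighting, pose, etc.)."""
    # Unconstrained parameter; tanh keeps sticker pixels in [-1, 1].
    w = torch.zeros(3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for stage in range(1, len(conditions) + 1):
        active = conditions[:stage]  # easy-to-complex schedule
        for _ in range(steps_per_stage):
            opt.zero_grad()
            loss = 0.0
            for cond in active:  # average over currently active conditions
                adv_face = cond(face, torch.tanh(w))
                emb = face_model(adv_face)
                # Impersonation: pull the embedding toward the target identity.
                loss = loss + (1 - torch.cosine_similarity(
                    emb, target_emb, dim=-1)).mean()
            (loss / len(active)).backward()
            opt.step()
    return torch.tanh(w).detach()
```

Expanding the condition pool stage by stage mirrors the easy-to-complex schedule described in the abstract; as in curriculum learning generally, this tends to stabilize optimization compared with sampling all hard conditions from the start.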