Deep learning-based facial recognition (FR) models have demonstrated state-of-the-art performance in the past few years, even as wearing protective medical face masks became commonplace during the COVID-19 pandemic. Given the outstanding performance of these models, the machine learning research community has shown increasing interest in challenging their robustness. Initially, researchers presented adversarial attacks in the digital domain; later, the attacks were transferred to the physical domain. However, in many cases, attacks in the physical domain are conspicuous, requiring, for example, the placement of a sticker on the face, and thus may raise suspicion in real-world environments (e.g., airports). In this paper, we propose Adversarial Mask, a physical universal adversarial perturbation (UAP) against state-of-the-art FR models that is applied to face masks in the form of a carefully crafted pattern. In our experiments, we examined the transferability of our adversarial mask to a wide range of FR model architectures and datasets. In addition, we validated our adversarial mask's effectiveness in real-world experiments by printing the adversarial pattern on a fabric medical face mask, causing the FR system to identify only 3.34% of the participants wearing the mask (compared to a minimum of 83.34% with the other evaluated masks).
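To make the core idea concrete, the sketch below illustrates the general form of a universal adversarial perturbation optimized against a face-embedding model: a single pattern, shared across all identities, is updated by gradient descent to push the embeddings of masked faces away from their clean-face embeddings. This is a minimal illustration of the generic UAP recipe, not the paper's exact method; `embed_model` (a PyTorch FR embedding network) and `apply_mask_pattern` (a hypothetical differentiable renderer that places the pattern on the mask region of each face) are assumed names, and the actual attack would also need geometric and color transformations to survive printing on fabric.

```python
# Minimal sketch of the generic universal-perturbation idea (not the paper's
# exact optimization). Assumes a PyTorch face-embedding model `embed_model`
# and a hypothetical differentiable helper `apply_mask_pattern` that renders
# the pattern onto the mask region of each face image.
import torch
import torch.nn.functional as F

def optimize_mask_pattern(embed_model, face_batches, apply_mask_pattern,
                          steps=1000, lr=0.01):
    # One pattern shared across all identities is what makes the
    # perturbation "universal" rather than per-person.
    pattern = torch.rand(3, 112, 112, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        for faces in face_batches:  # batches of unmasked face images
            clean_emb = embed_model(faces).detach()
            masked = apply_mask_pattern(faces, pattern.clamp(0, 1))
            adv_emb = embed_model(masked)
            # Minimize cosine similarity: push masked-face embeddings
            # away from the wearer's true identity embeddings.
            loss = F.cosine_similarity(adv_emb, clean_emb).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return pattern.detach().clamp(0, 1)
```

Because the loss is averaged over faces of many identities, the optimizer cannot exploit any one person's features; it must find a pattern that degrades recognition for wearers in general, which is what allows a single printed mask to work across participants.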