In this work, we investigate the threat that adversarial examples pose to the security of face recognition systems (FRSs). Although previous research has explored the adversarial risk to individual components of FRSs, our study presents an initial exploration of an adversary that simultaneously fools multiple components of an FRS pipeline: the face detector and the feature extractor. We propose three multi-objective attacks on FRSs and demonstrate their effectiveness through a preliminary experimental analysis on a target system. Our attacks achieve attack success rates of up to 100% against both the face detector and the feature extractor, and manipulate the face detection probability by up to 50%, depending on the adversarial objective. This research identifies and examines novel attack vectors against FRSs and suggests ways to improve robustness by leveraging knowledge of these attack vectors when training an FRS's components.