Face anti-spoofing aims to discriminate spoofing face images (e.g., printed photos) from live ones. However, adversarial examples greatly challenge its credibility: adding small perturbation noise can easily change the predictions. Previous works applied adversarial attack methods to evaluate face anti-spoofing performance without any fine-grained analysis of which model architecture or auxiliary feature is vulnerable to the adversary. To handle this problem, we propose a novel framework to expose the fine-grained adversarial vulnerability of face anti-spoofing models, which consists of a multitask module and a semantic feature augmentation (SFA) module. The multitask module obtains different semantic features for further evaluation, but attacking these semantic features alone fails to reflect discrimination-related vulnerability. We therefore design the SFA module, which introduces the data distribution prior to provide more discrimination-related gradient directions for generating adversarial examples. Comprehensive experiments show that the SFA module increases the attack success rate by nearly 40% on average. We conduct this fine-grained adversarial analysis on different annotations, geometric maps, and backbone networks (e.g., ResNet). The resulting fine-grained adversarial examples can be used to select robust backbone networks and auxiliary features. They can also be used for adversarial training, which makes it practical to further improve the accuracy and robustness of face anti-spoofing models.
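To make the attack setup concrete, below is a minimal FGSM-style sketch of generating an adversarial example against a multitask face anti-spoofing model by ascending a combined loss over the live/spoof head and an auxiliary semantic head (e.g., a depth-map branch). The model interface, loss weighting, and perturbation budget are illustrative assumptions, not the paper's implementation, and the SFA module's distribution-prior guidance is not reproduced here.

```python
# Hypothetical sketch: FGSM-style attack on a multitask anti-spoofing model.
# `model` is assumed to return (live/spoof logits, semantic prediction).
import torch
import torch.nn.functional as F

def fgsm_semantic_attack(model, x, y_live, y_semantic, eps=8 / 255):
    """One-step attack combining the discrimination loss with a semantic loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    live_logits, semantic_pred = model(x_adv)        # multitask forward pass
    loss = (F.cross_entropy(live_logits, y_live)     # live/spoof discrimination
            + F.mse_loss(semantic_pred, y_semantic)) # auxiliary semantic target
    loss.backward()
    # Move along the sign of the gradient; clamp to the valid image range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```

In this sketch, attacking only the semantic term would correspond to the failure mode noted above; the combined loss keeps the perturbation tied to the discrimination objective.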