Biometric data, such as face images, are often associated with sensitive information (e.g., medical, financial, and personal government records). Hence, a data breach in a system storing such information can have devastating consequences. Deep learning is widely utilized for face recognition (FR); however, such models are vulnerable to backdoor attacks executed by malicious parties. Backdoor attacks cause a model to misclassify a particular class as a target class during recognition. This vulnerability can allow adversaries to gain access to highly sensitive data protected by biometric authentication measures, or to masquerade as an individual with higher system permissions. Such breaches pose a serious privacy threat. Previous methods integrate noise-addition mechanisms into face recognition models to mitigate this issue and improve the robustness of classification against backdoor attacks. However, this can drastically degrade model accuracy. We propose a novel and generalizable approach, named BA-BAM (Biometric Authentication - Backdoor Attack Mitigation), that aims to prevent backdoor attacks on face authentication deep learning models through transfer learning and selective image perturbation. Empirical evidence shows that BA-BAM is highly robust, incurring an accuracy drop of at most 2.4% while reducing the attack success rate to at most 20%. Comparisons with existing approaches show that BA-BAM provides a more practical backdoor mitigation approach for face recognition.