Billions of people share images of their daily lives on social media every day. However, their biometric information (e.g., fingerprints) can easily be stolen from these images. Since fingerprints serve as a lifelong individual biometric password, the threat of fingerprint leakage from social media creates a strong demand for anonymizing shared images while preserving image quality. To guard against fingerprint leakage, adversarial attacks have emerged as a solution that adds imperceptible perturbations to images. However, existing methods either transfer poorly in black-box settings or appear visually unnatural. Motivated by the hierarchy of visual perception (i.e., high-level perception exploits model-shared semantics that transfer well across models, while low-level perception extracts primitive stimuli and reacts with high visual sensitivity to suspicious stimuli), we propose FingerSafe, a hierarchical perceptual protective noise injection framework that addresses both problems. For black-box transferability, we inject protective noise into the fingerprint orientation field to perturb the model-shared high-level semantics (i.e., fingerprint ridges). For visual naturalness, we suppress low-level local contrast stimuli by regularizing the response of the Lateral Geniculate Nucleus (LGN). FingerSafe is the first to provide feasible fingerprint protection in both digital (up to 94.12%) and realistic scenarios (Twitter and Facebook, up to 68.75%). Our code can be found at https://github.com/nlsde-safety-team/FingerSafe.
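To make the orientation-field idea concrete, the following is a minimal sketch of the classic gradient-based ridge orientation estimator that such a perturbation would target. It is not taken from the FingerSafe codebase; the function name `orientation_field`, the Sobel-based gradients, and the block size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def orientation_field(img: np.ndarray, block: int = 16) -> np.ndarray:
    """Estimate a block-wise fingerprint ridge orientation field.

    Standard gradient-based estimator: for each block,
    theta = 0.5 * atan2(2 * sum(Gx * Gy), sum(Gx^2 - Gy^2)) + pi/2,
    which gives the dominant ridge direction orthogonal to the gradient.
    """
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)  # horizontal gradient
    gy = ndimage.sobel(img, axis=0)  # vertical gradient

    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            vx = 2.0 * np.sum(gx[sl] * gy[sl])
            vy = np.sum(gx[sl] ** 2 - gy[sl] ** 2)
            theta[i, j] = 0.5 * np.arctan2(vx, vy) + np.pi / 2
    return theta
```

A protective perturbation in the spirit described above would be optimized so that the orientation field of the perturbed image diverges from that of the original, degrading ridge-based matching while the pixel-level change stays small.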