The proliferation of automated face recognition across commercial and government sectors has raised significant privacy concerns for individuals. A recent, popular approach to addressing these concerns is to employ evasion attacks against the metric embedding networks powering face recognition systems. Face obfuscation systems generate imperceptible perturbations that, when added to an image, cause the face recognition system to misidentify the user. The key to these approaches is the generation of perturbations using a pre-trained metric embedding network, followed by their application to an online system whose model may be proprietary. This dependence of face obfuscation on metric embedding networks, which are known to be unfair in the context of face recognition, surfaces the question of demographic fairness -- \textit{are there demographic disparities in the performance of face obfuscation systems?} To address this question, we perform an analytical and empirical exploration of the performance of recent face obfuscation systems that rely on deep embedding networks. We find that metric embedding networks are demographically aware: they cluster faces in the embedding space based on their demographic attributes. We observe that this effect carries through to face obfuscation systems: faces belonging to minority groups incur reduced utility compared to those from majority groups. For example, the disparity in average obfuscation success rate on the online Face++ API can reach up to 20 percentage points. We present an intuitive analytical model to provide insights into these phenomena.
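As a concrete illustration of the attack setting described above (not the paper's specific method), the following minimal sketch shows a PGD-style evasion attack against a pre-trained metric embedding network: the perturbation is optimized to push the face's embedding away from its original location under a small $\ell_\infty$ budget, so that a matcher operating in that (or a similar) embedding space misidentifies the user. The names \texttt{embed\_net} and \texttt{obfuscate} are hypothetical placeholders.
\begin{verbatim}
# Minimal sketch of an embedding-space evasion attack (PyTorch).
# `embed_net` is a hypothetical pre-trained face embedding network;
# `face` is a [1, 3, H, W] image tensor with values in [0, 1].
import torch
import torch.nn.functional as F

def obfuscate(face, embed_net, eps=8/255, step=1/255, iters=40):
    with torch.no_grad():
        target = embed_net(face)           # embedding of the clean face
    delta = torch.zeros_like(face, requires_grad=True)
    for _ in range(iters):
        emb = embed_net(face + delta)
        # Maximize distance from the original embedding.
        loss = F.mse_loss(emb, target)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()            # gradient ascent
            delta.clamp_(-eps, eps)                      # imperceptibility budget
            delta.data = (face + delta).clamp(0, 1) - face  # valid image range
        delta.grad.zero_()
    return (face + delta).detach()
\end{verbatim}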