Face biometrics play a key role in making modern smart city applications more secure and usable. The recognition threshold of a face recognition system is commonly adjusted to the degree of security the use case requires: for instance, setting a high threshold for payment transaction verification decreases the likelihood of a match. Unfortunately, prior work in face recognition has shown that error rates are usually higher for certain demographic groups. These disparities have called into question the fairness of systems powered by face biometrics. In this paper, we investigate the extent to which disparities among demographic groups change under different security levels. Our analysis covers ten face recognition models, three security thresholds, and six demographic groups based on gender and ethnicity. Experiments show that the higher the security level of the system, the larger the disparities in usability among demographic groups. Compelling unfairness issues therefore exist and call for countermeasures in real-world, high-stakes environments that demand strict security levels.
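The threshold mechanism described above can be sketched minimally: a verification system compares two face embeddings with a similarity score and accepts only if the score clears the operating threshold. This is an illustrative sketch, not the paper's pipeline; the embedding dimensionality, similarity measure (cosine), and threshold values are assumptions chosen for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two face embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, reference, threshold):
    # Accept the pair as a match only if the similarity clears the threshold.
    # Raising the threshold makes acceptance, and thus a false match, less likely.
    return cosine_similarity(probe, reference) >= threshold

# Hypothetical embeddings: a reference and a noisy probe of the same identity.
rng = np.random.default_rng(0)
reference = rng.normal(size=128)
probe = reference + rng.normal(scale=0.5, size=128)

lenient = verify(probe, reference, threshold=0.5)   # low-security setting
strict = verify(probe, reference, threshold=0.98)   # high-security setting, e.g. payments
```

With the stricter threshold the same genuine pair can be rejected, which is exactly the usability cost that, per the abstract, falls unevenly across demographic groups.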