Demographic bias is a significant challenge for practical face recognition systems. Existing debiasing methods rely heavily on accurate demographic annotations, which are usually unavailable in real-world scenarios. Moreover, these methods are typically designed for a specific demographic group and do not generalize well. In this paper, we propose a false positive rate penalty loss, which mitigates face recognition bias by increasing the consistency of the instance False Positive Rate (FPR). Specifically, we first define the instance FPR as the ratio between the number of non-target similarities above a unified threshold and the total number of non-target similarities, where the unified threshold is estimated for a given overall FPR. Then, an additional penalty term, proportional to the ratio of the instance FPR to the overall FPR, is introduced into the denominator of the softmax-based loss. The larger the instance FPR, the larger the penalty. Under such unequal penalties, the instance FPRs are driven toward consistency. Compared with previous debiasing methods, our method requires no demographic annotations. It can therefore mitigate bias among demographic groups divided by various attributes, and these attributes need not be predefined before training. Extensive experimental results on popular benchmarks demonstrate the superiority of our method over state-of-the-art competitors. Code and trained models are available at https://github.com/Tencent/TFace.
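To make the mechanism concrete, below is a minimal PyTorch sketch of a CosFace-style loss with an instance-FPR penalty folded into the denominator. It illustrates the idea as described above and is not the authors' released implementation (see the TFace repository for that); the scale `s`, margin `m`, penalty weight `alpha`, the `target_fpr` used to estimate the unified threshold, and the exact form in which the penalty enters the denominator are all assumptions.

```python
import torch
import torch.nn.functional as F

def fpr_penalty_loss(cosine, labels, s=64.0, m=0.35, alpha=0.1, target_fpr=1e-3):
    """CosFace-style softmax loss with an instance-FPR penalty in the
    denominator (a sketch of the idea; hyperparameters are assumptions).

    cosine: (B, C) cosine similarities between embeddings and class weights.
    labels: (B,) ground-truth class indices.
    """
    B, C = cosine.shape
    one_hot = F.one_hot(labels, num_classes=C).bool()

    # Unified threshold t: the (1 - target_fpr) quantile of all non-target
    # similarities in the batch. Detached: it only decides which non-target
    # pairs count as false positives.
    non_target_flat = cosine[~one_hot].detach()
    t = torch.quantile(non_target_flat, 1.0 - target_fpr)

    # Instance FPR: fraction of this instance's non-target similarities
    # above the unified threshold; the overall FPR is its batch mean.
    above = (cosine.detach() > t) & ~one_hot
    inst_fpr = above.float().sum(dim=1) / (C - 1)
    overall_fpr = inst_fpr.mean().clamp_min(1e-12)

    # Margin-based logits (CosFace). Masking the target position with -inf
    # makes its exp() contribute 0 to the non-target sum.
    pos_logit = s * (cosine.gather(1, labels[:, None]).squeeze(1) - m)
    neg_logits = s * cosine.masked_fill(one_hot, float('-inf'))

    # Penalty grows with the instance FPR relative to the overall FPR,
    # inflating the denominator for instances that trigger more false
    # positives at the shared threshold.
    penalty = alpha * (inst_fpr / overall_fpr)
    denom = torch.exp(pos_logit) + torch.exp(neg_logits).sum(dim=1) * (1.0 + penalty)
    return (torch.log(denom) - pos_logit).mean()
```

Because the threshold and the FPR statistics are detached, the penalty only reweights the non-target pressure per instance; no gradients flow through the FPR estimates themselves, so instances with above-average false positive rates simply receive a harsher denominator.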