Face recognition algorithms can be very useful when deployed in the real world, but they can also be dangerous when biased toward certain demographics. It is therefore essential to understand how these algorithms are trained and which factors affect their accuracy and fairness in order to build better ones. In this study, we shed light on the effect of the racial distribution of the training data on the performance of face recognition models. We conduct 16 experiments with varying racial distributions of faces in the training data and analyze the trained models using accuracy metrics, clustering metrics, UMAP projections, face image quality, and decision thresholds. We show that a uniform distribution of races in the training datasets alone does not guarantee bias-free face recognition algorithms, and that factors such as face image quality play a crucial role. We also study the correlation between the clustering metrics and bias to determine whether clustering is a good indicator of bias. Finally, we introduce a metric called racial gradation to study the inter- and intra-race correlations in facial features and how they affect the learning ability of face recognition models. With this study, we aim to bring more understanding to an essential element of face recognition training: the data. A better understanding of the impact of training data on the bias of face recognition algorithms will aid in creating better datasets and, in turn, better face recognition systems.