The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research on bias has focused primarily on face recognition and attribute prediction, with little emphasis on face detection. Existing studies treat face detection as binary classification into 'face' and 'non-face' classes. In this work, we investigate possible bias in face detection through facial region localization, which is currently unexplored. Since facial region localization is an essential step in all face recognition pipelines, it is imperative to analyze whether such bias is present in popular deep models. Most existing face detection datasets lack the annotations needed for such an analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, we design an experimental setup to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin tone, and (ii) an interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA.
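To make the evaluation concrete, the following is a minimal sketch of how localization-based detection accuracy could be disaggregated by demographic subgroup, as the abstract describes. It is not the authors' evaluation code: the annotation format, the group labels, and the IoU threshold of 0.5 are illustrative assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def per_group_detection_rate(faces, iou_threshold=0.5):
    """faces: list of dicts with keys 'group', 'gt_box', 'pred_boxes'
    (hypothetical annotation format). A ground-truth face counts as
    detected if any predicted box overlaps it with IoU >= threshold.
    Returns the detection rate for each subgroup."""
    hits, totals = {}, {}
    for face in faces:
        g = face['group']
        totals[g] = totals.get(g, 0) + 1
        detected = any(iou(face['gt_box'], p) >= iou_threshold
                       for p in face['pred_boxes'])
        hits[g] = hits.get(g, 0) + int(detected)
    return {g: hits[g] / totals[g] for g in totals}


# Toy example with two hypothetical subgroups; the gap between the
# returned rates is the kind of disparity the study measures.
faces = [
    {'group': 'male/light-skin', 'gt_box': (10, 10, 50, 50),
     'pred_boxes': [(12, 11, 49, 52)]},
    {'group': 'female/dark-skin', 'gt_box': (60, 60, 100, 100),
     'pred_boxes': []},  # missed detection
]
print(per_group_detection_rate(faces))
```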