The performance of deep neural networks for image recognition tasks such as predicting a smiling face is known to degrade for under-represented classes of sensitive attributes. We address this problem by introducing fairness-aware regularization losses based on batch estimates of Demographic Parity, Equalized Odds, and a novel Intersection-over-Union measure. Experiments on facial and medical images from CelebA, UTKFace, and the SIIM-ISIC melanoma classification challenge show the effectiveness of our proposed fairness losses for bias mitigation: they improve model fairness while maintaining high classification performance. To the best of our knowledge, our work is the first attempt to incorporate these types of losses in an end-to-end training scheme for mitigating biases of visual attribute predictors. Our code is available at https://github.com/nish03/FVAP.
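As an illustrative sketch (not the authors' implementation, whose exact formulation is in the repository above), a batch estimate of the Demographic Parity gap over a binary sensitive attribute can be computed as the absolute difference between the mean predicted positive probability in each group; in training, the analogous tensor operations would be used so the term stays differentiable:

```python
import numpy as np

def demographic_parity_gap(probs, groups):
    """Batch estimate of the Demographic Parity gap:
    |E[p_hat | A=0] - E[p_hat | A=1]|, where probs are predicted
    positive-class probabilities and groups is the binary sensitive
    attribute. Illustrative only; assumes both groups appear in the batch."""
    probs = np.asarray(probs, dtype=float)
    groups = np.asarray(groups)
    p0 = probs[groups == 0].mean()  # mean prediction for group 0
    p1 = probs[groups == 1].mean()  # mean prediction for group 1
    return abs(p0 - p1)

# Predictions strongly skewed toward group 1 yield a large gap:
gap = demographic_parity_gap([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

Adding such a batch statistic, weighted by a hyperparameter, to the classification loss is the general pattern behind fairness-aware regularization; the Equalized Odds variant conditions the same group-wise means on the ground-truth label.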