Fairness has become increasingly pivotal in facial recognition. Without bias mitigation, deploying unfair AI systems would harm the interests of underprivileged populations. In this paper, we observe that although features from deeper layers of a neural network generally offer higher accuracy, fairness deteriorates as we extract features from those deeper layers. This phenomenon motivates us to extend the concept of the multi-exit framework. Unlike existing works that mainly focus on accuracy, our multi-exit framework is fairness-oriented: its internal classifiers are trained to be both more accurate and fairer. During inference, any instance that receives a high-confidence prediction from an internal classifier is allowed to exit early. Moreover, our framework can be applied to most existing fairness-aware frameworks. Experimental results show that the proposed framework substantially improves fairness over the state of the art on the CelebA and UTKFace datasets.
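To make the early-exit mechanism described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a toy three-stage backbone, internal classifiers attached after each stage, single-image inference, and a hand-picked confidence threshold.

```python
import torch
import torch.nn as nn

class MultiExitNet(nn.Module):
    """Minimal multi-exit sketch: a shared backbone whose stages each
    feed an internal classifier ("exit"). Sizes are illustrative only."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU()),
        ])
        # One internal classifier per stage; during training each exit
        # would receive its own (fairness-aware) loss term.
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(c, num_classes))
            for c in (32, 64, 128)
        ])

    def forward(self, x):
        # Training mode: return logits from every exit so all internal
        # classifiers can be supervised jointly.
        logits = []
        for stage, head in zip(self.stages, self.exits):
            x = stage(x)
            logits.append(head(x))
        return logits

@torch.no_grad()
def early_exit_predict(model: MultiExitNet, x: torch.Tensor,
                       threshold: float = 0.9) -> int:
    """Inference for a single image (batch size 1): the instance exits at
    the first internal classifier whose softmax confidence clears
    `threshold` (an assumed value, not from the paper)."""
    for stage, head in zip(model.stages, model.exits):
        x = stage(x)
        probs = torch.softmax(head(x), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:
            return pred.item()   # confident enough: exit early
    return pred.item()           # otherwise fall through to the final exit
```

Because shallow exits handle the easy, high-confidence instances, deeper (less fair) features are consulted only when needed, which is the intuition behind pairing early exits with fairness-aware training of each internal classifier.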