Algorithmic fairness plays an important role in machine learning, and imposing fairness constraints during learning is a common approach. However, many datasets are imbalanced in certain label classes (e.g., "healthy") and sensitive subgroups (e.g., "older patients"). Empirically, this imbalance leads to a lack of generalizability not only of classification, but also of fairness properties, especially in over-parameterized models. For example, fairness-aware training may ensure equalized odds (EO) on the training data, yet EO is far from satisfied on new users. In this paper, we propose a theoretically principled yet Flexible approach that is Imbalance-Fairness-Aware (FIFA). Specifically, FIFA encourages both classification and fairness generalization and can be flexibly combined with many existing fair learning methods that use logits-based losses. While our main focus is on EO, FIFA can be directly applied to achieve equal opportunity (EqOpt); under certain conditions, it can also be applied to other fairness notions. We demonstrate the power of FIFA by combining it with a popular fair classification algorithm, and the resulting algorithm achieves significantly better fairness generalization on several real-world datasets.
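To make the equalized-odds (EO) notion above concrete, the following sketch computes an EO gap: the largest difference across sensitive groups in true-positive and false-positive rates. This is an illustrative helper, not the paper's FIFA method; the function name and toy data are hypothetical.

```python
import numpy as np

def eo_gap(y_true, y_pred, group):
    """Equalized-odds gap: max across-group difference in TPR and FPR.

    EO requires P(pred=1 | label=y, group=g) to be equal across groups g
    for both y=0 (FPR) and y=1 (TPR); a gap of 0 means EO holds exactly.
    """
    gaps = []
    for y in (0, 1):  # y=0 gives the FPR comparison, y=1 the TPR comparison
        rates = []
        for g in np.unique(group):
            mask = (y_true == y) & (group == g)
            rates.append(y_pred[mask].mean())  # P(pred=1 | label=y, group=g)
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy example: two groups with binary labels and predictions.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(eo_gap(y_true, y_pred, group))  # 0.5: both TPR and FPR differ by 0.5 across groups
```

The generalization problem the abstract describes is that this gap can be near zero on training data while remaining large when evaluated on held-out users, especially when one group is under-represented.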