Training classifiers under group fairness constraints regularizes disparities in predictions between groups. However, even when the constraints are satisfied during training, they might not generalize at evaluation time. To improve the generalizability of fair classifiers, we propose fair mixup, a new data augmentation strategy for imposing the fairness constraint. In particular, we show that fairness can be achieved by regularizing the model on paths of interpolated samples between the groups. We use mixup, a powerful data augmentation strategy, to generate these interpolates. We analyze fair mixup and empirically show that it ensures better generalization for both accuracy and fairness measures on tabular, vision, and language benchmarks.
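The core idea of regularizing along the interpolation path between groups can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the linear model, the group distributions, and the finite-difference approximation of the path smoothness are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two demographic groups with shifted feature distributions
# (purely synthetic, for illustration only).
x_group0 = rng.normal(0.0, 1.0, size=(64, 5))
x_group1 = rng.normal(0.5, 1.0, size=(64, 5))

w = rng.normal(size=5)  # hypothetical linear classifier weights


def predict(x, w):
    """Sigmoid scores of a simple linear model."""
    return 1.0 / (1.0 + np.exp(-x @ w))


def fair_mixup_penalty(x0, x1, w, n_steps=10):
    """Rough smoothness penalty along the mixup path between groups:
    as t moves from 0 to 1, samples are interpolated from group 0 to
    group 1, and the squared finite differences of the mean prediction
    along the path are accumulated."""
    ts = np.linspace(0.0, 1.0, n_steps + 1)
    means = np.array(
        [predict((1.0 - t) * x0 + t * x1, w).mean() for t in ts]
    )
    return float(np.sum(np.diff(means) ** 2) * n_steps)


penalty = fair_mixup_penalty(x_group0, x_group1, w)
print(penalty)
```

In training, a term like this penalty would be added to the classification loss, encouraging predictions to vary smoothly as inputs are interpolated between the groups; a penalty near zero indicates similar mean predictions along the whole path.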