Many existing group-fairness-aware training methods aim to achieve group fairness either by re-weighting underrepresented groups according to certain rules or by adding weakly approximated surrogates of the fairness metrics to the objective as regularization terms. Although each learning scheme has its own strength, in applicability and performance respectively, it is difficult to regard any method in either category as a gold standard, since its success is typically limited to specific cases. To that end, we propose a principled method, dubbed \ours, which unifies the two learning schemes by incorporating a well-justified group fairness metric into the training objective via a class-wise distributionally robust optimization (DRO) framework. We then develop an iterative optimization algorithm that minimizes the resulting objective by automatically producing the correct re-weights for each group. Our experiments show that FairDRO is scalable and easily adaptable to diverse applications, and that it consistently achieves state-of-the-art accuracy-fairness trade-offs on several benchmark datasets compared to recent strong baselines.
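The abstract does not spell out the re-weighting rule, but group-DRO-style objectives are commonly optimized with a multiplicative-weights update that upweights the groups currently incurring the highest loss. The following is a minimal illustrative sketch of such an update (the function name, step size \(\eta\), and two-group toy losses are our own assumptions, not taken from the paper):

```python
import numpy as np

def group_dro_weights(group_losses, q, eta=0.1):
    """One multiplicative-weights (exponentiated-gradient) update on the
    group weights q: groups with higher loss receive larger weight.
    NOTE: illustrative sketch only; not the paper's exact algorithm."""
    q = q * np.exp(eta * np.asarray(group_losses, dtype=float))
    return q / q.sum()  # renormalize onto the simplex

# Toy example with two groups, where group 1 currently has the higher loss;
# after the update its weight exceeds that of group 0.
q = np.array([0.5, 0.5])
q = group_dro_weights([0.2, 0.8], q, eta=1.0)
```

In a training loop, the model parameters would then be updated on the loss re-weighted by `q`, alternating with this weight update.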