We tackle the problem of group fairness in classification, where the objective is to learn models that do not unjustly discriminate against subgroups of the population. Most existing approaches are limited to simple binary tasks or rely on training mechanisms that are difficult to implement, which reduces their practical applicability. In this paper, we propose FairGrad, a method to enforce fairness based on a reweighting scheme that iteratively learns group-specific weights according to whether each group is advantaged or not. FairGrad is easy to implement and can accommodate various standard fairness definitions. Furthermore, we show that it is comparable to standard baselines over various datasets, including ones used in natural language processing and computer vision.
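The reweighting idea can be sketched as follows. This is a minimal illustration, not the paper's exact update rule: the function name, the use of per-group accuracy as the advantage signal, and the learning rate are all assumptions made for the example.

```python
import numpy as np

def fairgrad_weight_update(weights, group_scores, overall_score, lr=0.1):
    """One step of an illustrative group-reweighting scheme: groups scoring
    above the population average ("advantaged") are down-weighted, and
    disadvantaged groups are up-weighted. Hypothetical sketch, not the
    paper's formulation."""
    advantage = group_scores - overall_score          # > 0 means advantaged
    new_weights = weights - lr * advantage            # shrink advantaged weights
    new_weights = np.clip(new_weights, 0.0, None)     # keep weights non-negative
    # renormalize so weights average to 1 across groups
    return new_weights / new_weights.sum() * len(new_weights)

# toy example: two groups, group 0 advantaged
w = np.array([1.0, 1.0])
scores = np.array([0.9, 0.7])
w = fairgrad_weight_update(w, scores, scores.mean())
```

During training, these per-group weights would then scale each example's loss, so the disadvantaged group contributes more to the gradient at the next iteration.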