Much of the past work on fairness in machine learning has focused on forcing the predictions of classifiers to have similar statistical properties for individuals of different demographics. Yet, such methods often simply perform a rescaling of the classifier scores and ignore whether individuals of different groups have similar features. Our proposed method, Optimal Transport to Fairness (OTF), applies Optimal Transport (OT) to take this similarity into account by quantifying unfairness as the smallest cost of OT between a classifier and any score function that satisfies fairness constraints. For a flexible class of linear fairness constraints, we show a practical way to compute OTF as an unfairness cost term that can be added to any standard classification setting. Experiments show that OTF can be used to achieve an effective trade-off between predictive power and fairness.
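To make the idea of an additive unfairness cost term concrete, here is a minimal, illustrative sketch in Python. It is not the paper's OTF formulation: OTF quantifies unfairness as the smallest OT cost between the classifier and any score function satisfying a class of linear fairness constraints, while this sketch covers only the simple special case where "fair" means equal score distributions across two groups, so the penalty reduces to a 1-D Wasserstein distance between the groups' scores. All names (`ot_unfairness`, `lam`) and the toy data are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def ot_unfairness(scores_a, scores_b, n_grid=100):
    """1-D Wasserstein-1 distance between the score distributions of
    two groups, computed via matched quantiles on a common grid.
    Differentiable w.r.t. the scores, so it can act as a training penalty."""
    grid = torch.linspace(0.0, 1.0, n_grid)
    qa = torch.quantile(scores_a, grid)
    qb = torch.quantile(scores_b, grid)
    return torch.mean(torch.abs(qa - qb))

# Hypothetical training step: standard BCE loss plus the OT-based
# unfairness cost, weighted by a trade-off coefficient `lam`.
model = nn.Sequential(nn.Linear(10, 1))
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.5  # fairness/accuracy trade-off weight (assumed)

x = torch.randn(256, 10)                  # toy features
y = torch.randint(0, 2, (256,)).float()   # toy labels
group = torch.randint(0, 2, (256,))       # toy binary group membership

for _ in range(100):
    logits = model(x).squeeze(-1)
    scores = torch.sigmoid(logits)
    loss = bce(logits, y) + lam * ot_unfairness(
        scores[group == 0], scores[group == 1]
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Varying `lam` traces out the trade-off between predictive power and fairness that the experiments describe; the full method additionally accounts for feature similarity through the OT cost rather than only rescaling the score distributions.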