In this paper, we cast fair machine learning as invariant machine learning. We first formulate a version of individual fairness that enforces invariance on certain sensitive sets. We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently. Our theoretical results guarantee that the proposed approach trains certifiably fair ML models. Finally, in experimental studies we demonstrate improved fairness metrics compared to several recent fair training procedures on three ML tasks that are susceptible to algorithmic bias.
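For concreteness, a transport-based fairness regularizer of this kind can be sketched as follows (an illustrative form, not necessarily the exact formulation used here; the fair metric $d_x$ on inputs, the induced optimal-transport distance $W_{d_x}$, and the budget $\epsilon$ are assumptions of the sketch):

\[
R(h) \;=\; \sup_{P:\, W_{d_x}(P, P_n) \le \epsilon} \mathbb{E}_{(X,Y)\sim P}\bigl[\ell(h(X), Y)\bigr] \;-\; \mathbb{E}_{(X,Y)\sim P_n}\bigl[\ell(h(X), Y)\bigr],
\]

where $P_n$ is the empirical distribution of the training data and $\ell$ is the training loss. A small value of $R(h)$ certifies invariance in the sense above: transporting inputs within a sensitive set, at cost at most $\epsilon$ under $d_x$, cannot substantially increase the loss of $h$.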