We consider the task of enforcing individual fairness in gradient boosting. Gradient boosting is a popular method for machine learning from tabular data, which often arise in applications where algorithmic fairness is a concern. At a high level, our approach is a functional gradient descent on a (distributionally) robust loss function that encodes our intuition of algorithmic fairness for the ML task at hand. Unlike prior approaches to individual fairness that only work with smooth ML models, our approach also works with non-smooth models such as decision trees. We show that our algorithm converges globally and generalizes. We also demonstrate the efficacy of our algorithm on three ML problems susceptible to algorithmic bias.
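A minimal illustrative sketch of the high-level idea, not the paper's implementation: the inner maximization of the robust loss is approximated by a crude random search over perturbations along hypothetical "sensitive" feature directions (standing in for the paper's transport-based ambiguity set), a squared-error loss is assumed, and all names (`adversarial_points`, `robust_boost`, `sensitive_dirs`, `radius`) are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def adversarial_points(f, X, y, sensitive_dirs, radius, n_steps=10):
    """Approximate the worst-case perturbation of each point within a
    box of the given radius along the sensitive feature directions.
    (Hypothetical stand-in for the inner maximization of the robust loss.)"""
    X_adv = X.copy()
    best_loss = (f(X_adv) - y) ** 2
    for _ in range(n_steps):
        delta = np.zeros_like(X)
        delta[:, sensitive_dirs] = np.random.uniform(
            -radius, radius, (len(X), len(sensitive_dirs)))
        X_try = X + delta
        loss_try = (f(X_try) - y) ** 2
        improved = loss_try > best_loss        # keep the worse (adversarial) point
        X_adv[improved] = X_try[improved]
        best_loss = np.maximum(best_loss, loss_try)
    return X_adv

def robust_boost(X, y, sensitive_dirs, radius, n_rounds=50, lr=0.1, depth=3):
    """Functional gradient descent on the robust squared loss: each round
    fits a regression tree to residuals evaluated at the worst-case points."""
    trees = []

    def f(Z):
        if not trees:
            return np.zeros(len(Z))
        return lr * np.sum([t.predict(Z) for t in trees], axis=0)

    for _ in range(n_rounds):
        X_adv = adversarial_points(f, X, y, sensitive_dirs, radius)
        # Negative functional gradient of the squared loss (up to a constant),
        # evaluated at the adversarial points rather than the observed ones.
        residual = y - f(X_adv)
        trees.append(DecisionTreeRegressor(max_depth=depth).fit(X_adv, residual))
    return f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X[:, 0] + 0.1 * X[:, 3] + rng.normal(scale=0.1, size=200)
    f_fair = robust_boost(X, y, sensitive_dirs=[3], radius=1.0)
    print(f_fair(X[:5]))
```

The point of the sketch is the gradient step: each round fits the tree to residuals at the worst-case perturbed points rather than the observed points, which (by a Danskin-type argument) is a functional gradient of the robust loss and requires no smoothness of the base learner, so decision trees work.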