Given the abundance of ranking applications in recent years, addressing fairness concerns around automated ranking systems is necessary for increasing the trust of end-users. Previous work on fair ranking has mostly focused on application-specific fairness notions, often tailored to online advertising, and rarely considers learning as part of the process. In this work, we show how to transfer numerous fairness notions from binary classification to a learning-to-rank setting. Our formalism allows us to design a method for incorporating fairness objectives with provable generalization guarantees. An extensive experimental evaluation shows that our method can substantially improve ranking fairness with little or no loss of model quality.