We develop an algorithm to train individually fair learning-to-rank (LTR) models. The proposed approach ensures items from minority groups appear alongside similar items from majority groups. This notion of fair ranking is based on the definition of individual fairness from supervised learning and is more nuanced than prior fair LTR approaches that simply ensure the ranking model provides underrepresented items with a basic level of exposure. The crux of our method is an optimal transport-based regularizer that enforces individual fairness and an efficient algorithm for optimizing the regularizer. We show that our approach leads to certifiably individually fair LTR models and demonstrate the efficacy of our method on ranking tasks subject to demographic biases.
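The abstract does not specify how the optimal transport-based regularizer is computed. As a hedged illustration only (not the authors' implementation), one common way to evaluate an entropic OT cost between the exposure distributions of two comparable items is Sinkhorn iteration; the item distributions and position-based ground cost below are hypothetical:

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, n_iters=200):
    """Entropic-regularized OT cost between histograms a and b
    under ground cost matrix C, via Sinkhorn iterations."""
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):         # alternate scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)  # approximate transport plan
    return float(np.sum(P * C))

# Hypothetical example: exposure over three rank positions for two
# similar items from different demographic groups.
a = np.array([0.7, 0.2, 0.1])
b = np.array([0.1, 0.3, 0.6])
# Position-displacement ground cost |i - j| (an assumption for illustration).
C = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))
reg = sinkhorn_cost(a, b, C)  # penalty one might add to the ranking loss
```

A large value of `reg` indicates the model exposes the two similar items very differently, which is the kind of disparity an individual-fairness regularizer would penalize.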