Today, recommender systems play an increasingly important role in shaping our experience of digital environments and social interactions. However, as recommender systems become ubiquitous in our society, recent years have also witnessed significant fairness concerns about them. Specifically, studies have shown that recommender systems may inherit or even amplify biases from historical data and, as a result, provide unfair recommendations. To address these fairness risks, most previous approaches focus on modifying either the existing training data samples or the deployed recommendation algorithms, but unfortunately with limited success. In this paper, we propose a new approach, fair recommendation with optimized antidote data (FairRoad), which aims to improve the fairness of recommender systems through the construction of a small, carefully crafted antidote dataset. Toward this end, we formulate antidote data generation as a mathematical optimization problem that minimizes the unfairness of the targeted recommender system while not disrupting the deployed recommendation algorithm. Extensive experiments show that our proposed antidote data generation algorithm significantly improves the fairness of recommender systems with a small amount of antidote data.
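As a rough illustrative sketch (not necessarily the paper's exact formulation), antidote data generation of this kind can be cast as a bilevel optimization: the outer level chooses the injected antidote ratings to minimize an unfairness measure, while the inner level is the unmodified training of the deployed recommender on the augmented data. The symbols below ($X$, $\tilde{X}$, $\mathcal{U}$, $\mathcal{L}$, $\hat{\theta}$, $\alpha$) are assumptions introduced for illustration only.

\[
\begin{aligned}
\min_{\tilde{X}} \quad & \mathcal{U}\!\left(\hat{\theta}(X \cup \tilde{X})\right) \\
\text{s.t.} \quad & \hat{\theta}(X \cup \tilde{X}) \in \arg\min_{\theta} \; \mathcal{L}\!\left(\theta;\, X \cup \tilde{X}\right), \\
& |\tilde{X}| \le \alpha\,|X|,
\end{aligned}
\]

where $X$ is the original rating data, $\tilde{X}$ is the small set of antidote ratings to be crafted, $\mathcal{L}$ is the recommender's own training objective (left unchanged, so the deployed algorithm is not disrupted), $\mathcal{U}$ is an unfairness metric evaluated on the trained model $\hat{\theta}$, and $\alpha$ bounds the antidote budget to a small fraction of the original data.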