Recently, there has been rising awareness that when machine learning (ML) algorithms are used to automate decisions, they may treat or affect individuals unfairly, with legal, ethical, or economic consequences. Recommender systems are prominent examples of such ML systems, assisting users in making high-stakes judgments. A common trend in the prior literature on fairness in recommender systems is that the majority of works treat user and item fairness concerns separately, ignoring the fact that recommender systems operate in a two-sided marketplace. In this work, we present an optimization-based re-ranking approach that seamlessly integrates fairness constraints from both the consumer and producer side in a joint objective framework. We demonstrate through large-scale experiments on eight datasets that our proposed method improves both consumer and producer fairness without reducing overall recommendation quality, demonstrating the role algorithms can play in mitigating data biases.
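To make the notion of a joint objective concrete, a generic fairness-aware re-ranking problem of this kind can be sketched as follows; this is an illustrative formulation under assumed notation, not necessarily the exact objective optimized in this work:

\[
\max_{\pi}\; \sum_{u \in \mathcal{U}} \mathrm{Rel}(\pi_u)\;-\;\lambda_c\,\Phi_{\mathrm{consumer}}(\pi)\;-\;\lambda_p\,\Phi_{\mathrm{producer}}(\pi),
\]

where $\pi_u$ denotes the re-ranked list returned to user $u$, $\mathrm{Rel}(\cdot)$ measures recommendation quality (e.g., the sum of predicted relevance scores of the retained items), $\Phi_{\mathrm{consumer}}$ and $\Phi_{\mathrm{producer}}$ are unfairness penalties on the user and item side respectively, and $\lambda_c$, $\lambda_p$ control the trade-off between accuracy and the two fairness goals.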