Fairness is a critical system-level objective in recommender systems and has been the subject of extensive recent research. It is especially important in multi-sided recommendation platforms, where it may be crucial to optimize utilities not only for end users but also for other actors, such as item sellers or producers, who desire fair representation of their items. Existing solutions do not adequately address the various aspects of multi-sided fairness in recommendation: they either take a one-sided view (i.e., improving fairness for only one side) or fail to appropriately measure fairness for each actor involved in the system. In this thesis, I first investigate the impact of unfair recommendations on the system and how they can negatively affect its major actors. I then propose solutions to tackle this unfairness. First, I propose a rating transformation technique that works as a pre-processing step, applied before the recommendation model is built, to alleviate the inherent popularity bias in the input data and consequently mitigate exposure unfairness for items and suppliers in the recommendation lists. Second, I propose a general graph-based solution that works as a post-processing step, applied after recommendation generation, to mitigate multi-sided exposure bias in the recommendation results. For evaluation, I introduce several metrics for measuring exposure fairness for items and suppliers, and show that these metrics better capture the fairness properties of recommendation results. Extensive experiments on publicly available datasets, with comparisons against various baselines, confirm the superiority of the proposed solutions in improving exposure fairness for items and suppliers.
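To illustrate how exposure fairness for items might be quantified, the sketch below computes the Gini coefficient of item exposure aggregated over top-k recommendation lists — a common formulation in the exposure-bias literature, not necessarily the exact metrics proposed in this thesis. The function name, data layout, and uniform (position-unweighted) exposure counting are assumptions for illustration.

```python
import numpy as np

def exposure_gini(recommendation_lists, n_items):
    """Gini coefficient of item exposure across top-k lists.

    0.0 means all items receive equal exposure; values near 1.0
    mean exposure is concentrated on a few items.
    Note: counts each list slot equally; a position-weighted
    variant would discount lower-ranked slots.
    """
    # Count how many times each item appears across all lists.
    exposure = np.zeros(n_items)
    for rec_list in recommendation_lists:
        for item in rec_list:
            exposure[item] += 1

    total = exposure.sum()
    if total == 0:
        return 0.0

    # Standard Gini formula over the sorted exposure distribution.
    exposure = np.sort(exposure)
    index = np.arange(1, n_items + 1)
    return float((2 * index - n_items - 1).dot(exposure) / (n_items * total))

# Item 0 dominates every list, so exposure is unequal.
lists = [[0, 1], [0, 2], [0, 3], [0, 1]]
print(round(exposure_gini(lists, 4), 3))  # → 0.312
```

A supplier-side variant would aggregate exposure per supplier (summing over each supplier's items) before computing the same coefficient.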

