Fairness is an important property in data-mining applications, including recommender systems. In this work, we investigate a setting where users of a recommender system need (or want) to be fair to a protected group of items. For example, in a job market, the user is the recruiter, an item is a job seeker, and the protected attribute is gender or race. Even if recruiters want to use a fair talent recommender system, the platform may not provide one, and recruiters may not be able to ascertain whether the system's algorithm is fair. In such cases, recruiters either cannot use the recommender system or risk treating job seekers unfairly. In this work, we propose methods that enable users to build their own fair recommender systems. Our methods can generate fair recommendations even when the platform does not (or cannot) provide a fair recommender system. The key challenge is that a user has access to neither the log data of other users nor the latent representations of items; this restriction prevents us from adopting existing methods, which are designed for platforms. The main idea is that a user does have access to the (possibly unfair) recommendations provided by the platform. Our methods leverage the outputs of the unfair recommender system to construct a new, fair recommender system. We empirically validate that our proposed methods improve fairness substantially while largely preserving the performance of the original unfair system.
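To make the setting concrete, the following is a minimal, hypothetical sketch of the user-side idea: re-rank the platform's (possibly unfair) recommendation list so that protected and non-protected items receive balanced exposure, using only the platform's output order and publicly visible item attributes, without any log data or latent item representations. The function name, the parity criterion, and the interleaving strategy are illustrative assumptions for exposition, not the actual algorithm proposed in this work.

```python
from itertools import zip_longest


def fair_rerank(provider_ranking, is_protected, k=10):
    """Interleave protected and non-protected items while keeping the
    provider's relative order within each group, and return the top-k.

    This uses only information a single user can observe: the ranked
    list returned by the platform and each item's protected attribute.
    """
    protected = [i for i in provider_ranking if is_protected(i)]
    others = [i for i in provider_ranking if not is_protected(i)]

    reranked = []
    for p, o in zip_longest(protected, others):
        # Alternate the two groups so exposure near the top of the
        # list is balanced, instead of following the raw provider order.
        if p is not None:
            reranked.append(p)
        if o is not None:
            reranked.append(o)
    return reranked[:k]


if __name__ == "__main__":
    # Toy example: items 0-9 as ranked by the platform; odd item ids
    # stand in for the protected group (e.g., a demographic attribute).
    platform_output = [0, 2, 4, 6, 1, 8, 3, 5, 7, 9]
    fair_list = fair_rerank(platform_output,
                            is_protected=lambda i: i % 2 == 1,
                            k=6)
    print(fair_list)  # protected and non-protected items alternate
```

This sketch only illustrates the access constraints of the user-side setting; the actual methods proposed here aim to achieve fairness with far less loss of recommendation quality than naive interleaving.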