In recommender systems, a common challenge is the cold-start problem, where interactions are very limited for new users. To address this challenge, many recent works introduce the idea of meta-optimization into the recommendation scenario, i.e., learning to learn user preferences from only a few past interactions. The core idea is to learn globally shared meta-initialization parameters for all users and rapidly adapt them into local parameters for each user. These methods aim to derive general knowledge across the preference learning of various users, so as to adapt rapidly to a future new user given the learned prior and a small amount of training data. However, previous work has shown that recommender systems are generally vulnerable to bias and unfairness. Despite the success of meta-learning at improving cold-start recommendation performance, the fairness issues it raises are largely overlooked. In this paper, we propose a comprehensive fair meta-learning framework, named CLOVER, for ensuring the fairness of meta-learned recommendation models. We systematically study three kinds of fairness in recommender systems: individual fairness, counterfactual fairness, and group fairness, and propose to satisfy all three via a multi-task adversarial learning scheme. Our framework offers a generic training paradigm that is applicable to different meta-learned recommender systems. We demonstrate the effectiveness of CLOVER with a representative meta-learned user preference estimator on three real-world datasets. Empirical results show that CLOVER achieves comprehensive fairness without deteriorating the overall cold-start recommendation performance.
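To make the meta-optimization idea concrete, below is a minimal, illustrative JAX sketch of MAML-style per-user adaptation: a shared meta-initialization is updated by one inner gradient step on a user's support interactions, and the meta-gradient is taken through that adaptation on the user's query interactions. This is not the authors' implementation; the linear preference model and names such as `adapt` and `inner_lr` are assumptions for illustration, and CLOVER's multi-task adversarial fairness terms (omitted here) would be added to the outer objective.

```python
# Minimal MAML-style sketch (assumed linear preference model, not CLOVER's actual code).
import jax
import jax.numpy as jnp

def predict(params, items):
    # Linear user-preference estimator: score = items @ w + b.
    w, b = params
    return items @ w + b

def mse(params, items, ratings):
    # Per-user recommendation loss on a set of interactions.
    return jnp.mean((predict(params, items) - ratings) ** 2)

def adapt(meta_params, items, ratings, inner_lr=0.1):
    # Inner loop: one gradient step from the globally shared
    # meta-initialization yields user-specific local parameters.
    grads = jax.grad(mse)(meta_params, items, ratings)
    return jax.tree_util.tree_map(lambda p, g: p - inner_lr * g,
                                  meta_params, grads)

def meta_loss(meta_params, support, query):
    # Outer objective: evaluate the adapted local parameters on the
    # user's query set. CLOVER would add adversarial fairness terms here.
    local = adapt(meta_params, *support)
    return mse(local, *query)

# Outer loop: meta-gradient flows through the adaptation step.
meta_grad_fn = jax.grad(meta_loss)

# Toy usage with one user's (hypothetical) support/query interactions.
key = jax.random.PRNGKey(0)
items = jax.random.normal(key, (5, 8))
ratings = jnp.ones(5)
theta = (jnp.zeros(8), jnp.array(0.0))
g = meta_grad_fn(theta, (items, ratings), (items, ratings))
```

Averaging such meta-gradients over a batch of users and applying them to `theta` is what yields the shared initialization that adapts quickly to new cold-start users.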