News recommendation is important for online news services. Existing news recommendation models are usually learned from users' news click behaviors. The behaviors of users who share the same sensitive attributes (e.g., gender) often follow similar patterns, and news recommendation models can easily capture these patterns. This may introduce biases related to sensitive user attributes into the recommendation results, e.g., always recommending sports news to male users, which is unfair because users may not receive diverse news information. In this paper, we propose a fairness-aware news recommendation approach with decomposed adversarial learning and orthogonality regularization, which can alleviate the unfairness in news recommendation caused by biases in sensitive user attributes. In our approach, we decompose the user interest model into two components: one learns a bias-aware user embedding that captures the bias information of sensitive user attributes, and the other learns a bias-free user embedding that encodes only attribute-independent user interest information for fairness-aware news recommendation. In addition, we apply an attribute prediction task to the bias-aware user embedding to enhance its ability to model biases, and we apply adversarial learning to the bias-free user embedding to remove the bias information from it. Moreover, we propose an orthogonality regularization method that encourages the bias-free user embedding to be orthogonal to the bias-aware one, so that the two embeddings can be better distinguished. For fairness-aware news ranking, we use only the bias-free user embedding. Extensive experiments on a benchmark dataset show that our approach can effectively improve fairness in news recommendation with only a minor performance loss.
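The orthogonality regularization described above can be sketched as a penalty on the alignment between the two decomposed user embeddings. The snippet below is a minimal illustration, not the paper's exact formulation: it assumes the regularizer is the absolute cosine similarity between the bias-free and bias-aware embeddings, which is zero exactly when the two vectors are orthogonal.

```python
import math

def orthogonality_penalty(u_free, u_bias, eps=1e-8):
    """Absolute cosine similarity between the bias-free embedding u_free
    and the bias-aware embedding u_bias. Minimizing this term pushes the
    two embeddings toward orthogonality. (One plausible form of the
    regularizer; the paper's exact loss may differ.)"""
    dot = sum(x * y for x, y in zip(u_free, u_bias))
    norm = (math.sqrt(sum(x * x for x in u_free))
            * math.sqrt(sum(y * y for y in u_bias)) + eps)
    return abs(dot) / norm

# Toy check: orthogonal embeddings incur (near-)zero penalty,
# while identical embeddings incur the maximum penalty of ~1.
a = [1.0, 0.0]
b = [0.0, 1.0]
print(orthogonality_penalty(a, b))  # ~0.0
print(orthogonality_penalty(a, a))  # ~1.0
```

In training, such a term would be added to the overall objective alongside the recommendation loss, the attribute prediction loss on the bias-aware embedding, and the adversarial loss on the bias-free embedding, with weighting coefficients left as hyperparameters.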