As a key application of artificial intelligence, recommender systems are among the most pervasive computer-aided systems that help users find potential items of interest. Recently, researchers have paid considerable attention to fairness issues in artificial intelligence applications. Most of these approaches assume independence of instances and design sophisticated models to eliminate sensitive information in order to facilitate fairness. However, recommender systems differ greatly from these settings, as users and items naturally form a user-item bipartite graph and are collaboratively correlated through the graph structure. In this paper, we propose a novel graph-based technique for ensuring the fairness of any recommendation model. Here, the fairness requirement refers to not exposing a set of sensitive features in the user modeling process. Specifically, given the original embeddings from any recommendation model, we learn a composition of filters that transform each user's and each item's original embedding into a filtered embedding space, conditioned on the sensitive feature set. For each user, this transformation is achieved under adversarial learning over a user-centric graph, in order to obfuscate each sensitive feature in both the filtered user embedding and this user's subgraph structure. Finally, extensive experimental results clearly show the effectiveness of our proposed model for fair recommendation. We publish the source code at https://github.com/newlei/FairGo.
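To make the filter-plus-adversary idea concrete, the following is a minimal PyTorch sketch of the core training loop described above: a filter network maps a pretrained embedding into the filtered space, while a discriminator tries to recover a binary sensitive attribute from the filtered embedding, and the filter is trained to fool it. All module names, dimensions, the single-attribute setup, and the toy data are illustrative assumptions, not the authors' implementation; the graph-based part (additionally applying discriminators to embeddings aggregated from each user's ego-centric subgraph) and the recommendation-utility loss are omitted for brevity. See https://github.com/newlei/FairGo for the released code.

```python
import torch
import torch.nn as nn

EMB_DIM = 64  # assumed embedding size of the base recommender


class Filter(nn.Module):
    """Maps an original embedding into the filtered (fair) embedding space."""
    def __init__(self, dim: int = EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.LeakyReLU(), nn.Linear(dim, dim)
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.net(emb)


class Discriminator(nn.Module):
    """Predicts a binary sensitive attribute from a filtered embedding."""
    def __init__(self, dim: int = EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.LeakyReLU(), nn.Linear(dim, 1)
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.net(emb).squeeze(-1)  # logit for the sensitive attribute


filt, disc = Filter(), Discriminator()
opt_f = torch.optim.Adam(filt.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Toy batch: pretrained user embeddings (from any base recommender)
# and their sensitive labels. Both are random placeholders here.
user_emb = torch.randn(32, EMB_DIM)
sensitive = torch.randint(0, 2, (32,)).float()

for _ in range(100):
    # 1) Train the discriminator to detect the sensitive attribute;
    #    detach the filter output so only the discriminator updates.
    opt_d.zero_grad()
    d_loss = bce(disc(filt(user_emb).detach()), sensitive)
    d_loss.backward()
    opt_d.step()

    # 2) Train the filter adversarially: maximize the discriminator's
    #    loss so the filtered embedding carries no sensitive signal.
    #    (A recommendation loss would be added here to preserve utility.)
    opt_f.zero_grad()
    f_loss = -bce(disc(filt(user_emb)), sensitive)
    f_loss.backward()
    opt_f.step()
```

In this sketch the two players are updated alternately, the standard min-max recipe for adversarial representation learning; the full model would train one such filter-discriminator pair per sensitive feature (hence a "composition of filters") and pair the user-level discriminator with graph-level ones over each user-centric subgraph.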