Recently, much attention has been paid to the societal impact of AI, especially concerns regarding its fairness. A growing body of research has identified unfair AI systems and proposed methods to debias them, yet many challenges remain. Representation learning for Heterogeneous Information Networks (HINs), a fundamental building block used in complex network mining, has socially consequential applications such as automated career counseling, but there have been few attempts to ensure that it will not encode or amplify harmful biases, e.g., sexism in the job market. To address this gap, in this paper we propose a comprehensive set of de-biasing methods for fair HIN representation learning, including sampling-based, projection-based, and graph neural network (GNN)-based techniques. We systematically study the behavior of these algorithms, especially their capability to balance the trade-off between fairness and prediction accuracy. We evaluate the performance of the proposed methods in an automated career counseling application, where we mitigate gender bias in career recommendation. Based on the evaluation results on two datasets, we identify the most effective fair HIN representation learning techniques under different conditions.