The rise of graph representation learning as the primary solution for many different network science tasks has led to a surge of interest in the fairness of this family of methods. Link prediction, in particular, has a substantial social impact. However, link prediction algorithms tend to increase segregation in social networks by disfavoring links between individuals belonging to specific demographic groups. This paper proposes a novel way to enforce fairness on graph neural networks with a fine-tuning strategy. We Drop the unfair Edges and, simultaneously, we Adapt the model's parameters to those modifications, DEA in short. We introduce two covariance-based constraints designed explicitly for the link prediction task and use them to guide the optimization process responsible for learning the new "fair" adjacency matrix. One novelty of DEA is that we can use a discrete yet learnable adjacency matrix during fine-tuning. We demonstrate the effectiveness of our approach on five real-world datasets and show that we can improve both the accuracy and the fairness of the link prediction task. In addition, we present an in-depth ablation study demonstrating that our training algorithm for the adjacency matrix can also be used to improve link prediction performance during training. Finally, we assess the relevance of each component of our framework and show that combining the constraints with the training of the adjacency matrix leads to the best performance.
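As a rough illustration of what a covariance-based fairness constraint for link prediction could look like, the sketch below penalizes the covariance between predicted edge scores and an intra-group indicator derived from the endpoints' sensitive attributes. The function name, the PyTorch setting, and the exact form of the indicator are assumptions made for illustration only; the constraints actually used by DEA may differ.

```python
# Hypothetical sketch, not the paper's implementation: a covariance-based
# fairness penalty for link prediction, assuming PyTorch tensors.
import torch

def cov_fairness_penalty(edge_scores: torch.Tensor,
                         sensitive: torch.Tensor,
                         edge_index: torch.Tensor) -> torch.Tensor:
    """Penalize the covariance between predicted link scores and an indicator
    that is 1 when the two endpoints share the sensitive attribute and 0
    otherwise. Driving this covariance toward zero encourages scores that do
    not systematically favor intra-group over inter-group links."""
    src, dst = edge_index                                  # shape (2, num_candidate_edges)
    same_group = (sensitive[src] == sensitive[dst]).float()
    # Sample covariance between the centered indicator and the centered scores.
    cov = ((same_group - same_group.mean()) *
           (edge_scores - edge_scores.mean())).mean()
    return cov.abs()

# Usage sketch: add the penalty to the link-prediction loss while fine-tuning,
# with `lam` weighting the fairness term (all names here are illustrative).
# loss = bce_loss(edge_scores, labels) + lam * cov_fairness_penalty(edge_scores, sensitive, edge_index)
```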