With the emergence of graph neural networks (GNNs) and their widespread deployment in real-world scenarios, the fairness and privacy of GNNs have attracted considerable interest, as they are two essential social concerns in the era of building trustworthy GNNs. Existing studies have explored the fairness and privacy of GNNs separately and shown that both come at the cost of GNN performance. However, the interaction between them has yet to be explored and understood. In this paper, we investigate the interaction between the fairness of a GNN and its privacy for the first time. We empirically identify that edge privacy risks increase when the individual fairness of nodes is improved. Next, we present the intuition behind this trade-off and employ the influence function and Pearson correlation to measure it theoretically. To account for the performance, fairness, and privacy of GNNs simultaneously, we propose a retraining mechanism that combines a fairness-aware reweighting module with a privacy-aware graph-structure perturbation module. Experimental results demonstrate that our method effectively improves GNN fairness at a limited performance cost and with restricted privacy risks.
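The abstract does not specify how the correlation-based measurement is implemented; as a minimal illustrative sketch of the kind of analysis described (quantifying the fairness–privacy trade-off via Pearson correlation), the snippet below correlates hypothetical per-model individual-fairness scores with hypothetical edge membership inference attack AUCs. All variable names and values are assumptions for illustration, not the paper's actual data or code.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-run measurements, one entry per trained GNN variant.
# individual_fairness: a node-level individual fairness score (higher = fairer).
# edge_attack_auc: AUC of an edge membership inference attack (higher = leakier).
individual_fairness = np.array([0.61, 0.68, 0.74, 0.79, 0.85])
edge_attack_auc = np.array([0.71, 0.74, 0.78, 0.81, 0.86])

# Pearson correlation between fairness improvement and edge privacy leakage.
r, p_value = pearsonr(individual_fairness, edge_attack_auc)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
# A strongly positive r is consistent with the reported trade-off:
# improving individual fairness coincides with higher edge privacy risk.
```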