Graph learning models are critical tools for researchers to explore graph-structured data. A conventional way to train a capable graph learning model is to gather sufficient training data and train the model on a single device. However, this is often prohibitive in real-world scenarios due to privacy concerns. Federated learning offers a feasible solution by introducing various privacy-preserving mechanisms, such as differential privacy on graph edges. Nevertheless, while differential privacy in federated graph learning protects the sensitive information maintained in graphs, it also degrades the performance of graph learning models. In this paper, we investigate how to implement differential privacy on graph edges and observe the resulting performance drop in experiments. We further note that differential privacy on graph edges introduces noise that perturbs graph proximity, which is one of the graph augmentations used in graph contrastive learning. Inspired by this observation, we propose to leverage graph contrastive learning to alleviate the performance drop caused by differential privacy. Extensive experiments with several representative graph models and widely used datasets show that contrastive learning indeed alleviates the models' performance drop caused by differential privacy.
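To make the edge-level mechanism concrete, the sketch below shows one standard way to apply differential privacy to graph edges: randomized response on the adjacency matrix, where each potential edge is flipped with a probability determined by the privacy budget ε. This is a minimal illustration of the general technique, not the paper's exact implementation; the function name and interface are assumptions for the example.

```python
import numpy as np

def perturb_edges(adj: np.ndarray, epsilon: float, seed: int = 0) -> np.ndarray:
    """Randomized response on an undirected adjacency matrix (illustrative sketch).

    Each potential edge (i, j) with i < j is flipped independently with
    probability 1 / (1 + e^epsilon), which satisfies epsilon-level
    differential privacy for the presence or absence of any single edge.
    Smaller epsilon means more flips, i.e. noisier graph proximity.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    flip_prob = 1.0 / (1.0 + np.exp(epsilon))
    # Sample flips on the upper triangle only, then mirror to keep the
    # graph undirected; the diagonal (self-loops) is left untouched.
    flips = rng.random((n, n)) < flip_prob
    upper = np.triu(flips, k=1)
    mask = upper | upper.T
    return np.where(mask, 1 - adj, adj)

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
noisy = perturb_edges(adj, epsilon=1.0)
```

Viewed through the lens of graph contrastive learning, the perturbed adjacency matrix plays the same role as an edge-dropping/edge-adding augmentation, which is the connection the paper exploits.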