Many real-world networks are inherently decentralized. For example, in social networks, each user maintains a local view of the social graph, such as a list of friends and their profile. It is common to collect these local views and conduct graph learning tasks over the assembled graph. However, learning over such graphs can raise privacy concerns, as the local views often contain sensitive information. In this paper, we seek to ensure private graph learning on a decentralized network graph. Towards this objective, we propose {\em Solitude}, a new privacy-preserving learning framework based on graph neural networks (GNNs), with formal privacy guarantees under edge local differential privacy. The crux of {\em Solitude} is a set of new mechanisms that calibrate the noise introduced into the decentralized graph collected from users. The calibration exploits intrinsic properties shared by many real-world graphs, such as sparsity. Unlike existing work on locally private GNNs, our new framework simultaneously protects node feature privacy and edge privacy, and can be seamlessly integrated with any GNN while retaining privacy-utility guarantees. Extensive experiments on benchmark datasets show that {\em Solitude} retains the generalization capability of the learned GNN while preserving users' data privacy under given privacy budgets.
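For context, the edge local differential privacy notion referenced above can be stated as follows; this is the standard formulation, with the symbols $\varepsilon$, $\mathcal{M}$, $a$, $a'$, and $o$ introduced here only for illustration:
\begin{quote}
A randomized mechanism $\mathcal{M}$ satisfies $\varepsilon$-edge local differential privacy if, for any two adjacency lists $a$ and $a'$ of a user that differ in exactly one edge, and for any possible output $o$,
\[
\Pr[\mathcal{M}(a) = o] \le e^{\varepsilon} \, \Pr[\mathcal{M}(a') = o].
\]
\end{quote}
Intuitively, each user perturbs their own adjacency list locally before sharing it, so the data collector never observes the true edges; smaller $\varepsilon$ means stronger privacy but more noise for the learning task to absorb.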