In this paper, we study the problem of learning Graph Neural Networks (GNNs) with Differential Privacy (DP). We propose a novel differentially private GNN based on Aggregation Perturbation (GAP), which adds stochastic noise to the GNN's aggregation function to statistically obfuscate the presence of a single edge (edge-level privacy) or a single node and all its adjacent edges (node-level privacy). Tailored to the specifics of private learning, GAP's new architecture is composed of three separate modules: (i) the encoder module, where we learn private node embeddings without relying on the edge information; (ii) the aggregation module, where we compute noisy aggregated node embeddings based on the graph structure; and (iii) the classification module, where we train a neural network on the private aggregations for node classification without further querying the graph edges. GAP's major advantage over previous approaches is that it can benefit from multi-hop neighborhood aggregations, and guarantees both edge-level and node-level DP not only for training, but also at inference with no additional costs beyond the training's privacy budget. We analyze GAP's formal privacy guarantees using Rényi DP and conduct empirical experiments over three real-world graph datasets. We demonstrate that GAP offers significantly better accuracy-privacy trade-offs than state-of-the-art DP-GNN approaches and naive MLP-based baselines. Our code is publicly available at https://github.com/sisaman/GAP.
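The core idea of aggregation perturbation can be illustrated with a minimal sketch: if each node embedding is normalized to unit L2 norm, adding or removing one edge changes the neighborhood sum by at most 1 in L2 norm, so Gaussian noise calibrated to that sensitivity yields an edge-level DP aggregate (via the standard Gaussian mechanism). This is an illustrative simplification, not the paper's full method; the function and parameter names below are hypothetical.

```python
import numpy as np

def perturbed_aggregate(x, adj, noise_std, rng=None):
    """Noisy neighborhood aggregation (edge-level DP sketch).

    x:         (n, d) array of node embeddings.
    adj:       (n, n) binary adjacency matrix.
    noise_std: Gaussian noise scale sigma; the privacy guarantee would
               follow from the Gaussian mechanism with L2 sensitivity 1,
               since each row of x is clipped to unit L2 norm below.
    """
    rng = np.random.default_rng(rng)
    # Row-normalize embeddings so one edge perturbs the sum by at most 1.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x_unit = x / np.clip(norms, 1e-12, None)
    # Sum each node's neighbors' embeddings, then add isotropic noise.
    agg = adj @ x_unit
    return agg + rng.normal(0.0, noise_std, size=agg.shape)
```

In the paper's architecture, the downstream classifier is trained only on such noisy aggregates, which is why inference incurs no privacy cost beyond the training budget: the graph edges are never queried again.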