Classification tasks on labeled graph-structured data have many important applications, ranging from social recommendation to financial modeling. Deep neural networks are increasingly being used for node classification on graphs, wherein nodes with similar features should be assigned the same label. Graph convolutional networks (GCNs) are one such widely studied neural network architecture that performs well on this task. However, powerful link-stealing attacks on GCNs have recently shown that even with black-box access to the trained model, inferring which links (or edges) are present in the training graph is practical. In this paper, we present a new neural network architecture called LPGNet for training on graphs with privacy-sensitive edges. LPGNet provides differential privacy (DP) guarantees for edges using a novel design for how the graph edge structure is used during training. We empirically show that LPGNet models often lie in the sweet spot between privacy and utility: they can offer better utility than "trivially" private architectures that use no edge information (e.g., vanilla MLPs) and better resilience against existing link-stealing attacks than vanilla GCNs, which use the full edge structure. On most of our evaluated datasets, LPGNet also offers consistently better privacy-utility tradeoffs than DPGCN, the state-of-the-art mechanism for retrofitting differential privacy into conventional GCNs.
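As background for the edge-level DP guarantee discussed above: a standard way to privatize graph edges (the kind of perturbation used by retrofitting approaches such as DPGCN) is randomized response on the adjacency matrix, where each potential edge bit is independently flipped. The sketch below is illustrative only; the function name and interface are our own and do not reflect LPGNet's actual mechanism.

```python
import math
import random

def randomized_response_edges(adj, epsilon, rng=None):
    """Perturb a symmetric 0/1 adjacency matrix with randomized response.

    Each undirected edge bit is kept with probability e^eps / (1 + e^eps)
    and flipped otherwise, which yields epsilon-DP at the edge level.
    Illustrative sketch, not LPGNet's mechanism.
    """
    rng = rng or random.Random()
    keep_prob = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    n = len(adj)
    noisy = [row[:] for row in adj]
    for i in range(n):
        for j in range(i + 1, n):  # decide each undirected edge once
            if rng.random() > keep_prob:
                noisy[i][j] = noisy[j][i] = 1 - adj[i][j]
    return noisy
```

Note the privacy-utility tension this makes concrete: for small epsilon the keep probability approaches 1/2 and the released graph carries little edge information, while for large epsilon the graph is nearly unperturbed and link inference remains easy.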