Recent advances in protecting node privacy on graph data and in attacking graph neural networks (GNNs) have gained much attention. However, these two essential tasks have not yet been brought together. Imagine an adversary who can exploit powerful GNNs to infer users' private labels in a social network. How can we adversarially defend against such privacy attacks while maintaining the utility of the perturbed graph? In this work, we propose a novel research task, adversarial defense against GNN-based privacy attacks, and present a graph perturbation-based approach, NetFense, to achieve this goal. NetFense simultaneously keeps the graph data unnoticeable (i.e., limiting changes to the graph structure), maintains the prediction confidence of targeted-label classification (i.e., preserving data utility), and reduces the prediction confidence of private-label classification (i.e., protecting node privacy). Experiments on single- and multi-target perturbations over three real graph datasets show that the graphs perturbed by NetFense effectively maintain data utility (i.e., model unnoticeability) for targeted-label classification and significantly decrease the prediction confidence of private-label classification (i.e., privacy protection). Extensive studies also yield several insights, such as the flexibility of NetFense, the role of preserving local neighborhoods in data unnoticeability, and stronger privacy protection for high-degree nodes.
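To illustrate the trade-off described above, the following is a minimal, hypothetical sketch (not the actual NetFense algorithm): it greedily searches for the single edge flip that most reduces a target node's private-label confidence while keeping its targeted-label confidence within a tolerance. The one-hop mean-aggregation scoring model, feature encodings, and `util_budget` parameter are illustrative assumptions.

```python
# Illustrative sketch of perturbation-based privacy defense on a graph:
# flip the edge that best lowers private-label confidence for a target node
# while keeping targeted-label (utility) confidence nearly unchanged.
import itertools

def confidence(adj, feats, node):
    """Mean-aggregate features over the node's closed one-hop neighborhood,
    then normalize to obtain confidence-like scores per label."""
    neigh = [j for j, e in enumerate(adj[node]) if e] + [node]
    agg = [sum(feats[j][k] for j in neigh) / len(neigh)
           for k in range(len(feats[0]))]
    total = sum(agg) or 1.0
    return [a / total for a in agg]

def best_flip(adj, priv_feats, util_feats, target, util_budget=0.1):
    """Return the edge flip (i, j) that most lowers the target node's max
    private-label confidence while changing its max targeted-label
    confidence by less than util_budget."""
    base_util = max(confidence(adj, util_feats, target))
    best, best_priv = None, max(confidence(adj, priv_feats, target))
    for i, j in itertools.combinations(range(len(adj)), 2):
        adj[i][j] = adj[j][i] = 1 - adj[i][j]   # flip candidate edge
        priv = max(confidence(adj, priv_feats, target))
        util = max(confidence(adj, util_feats, target))
        if priv < best_priv and abs(util - base_util) < util_budget:
            best, best_priv = (i, j), priv
        adj[i][j] = adj[j][i] = 1 - adj[i][j]   # restore the graph
    return best, best_priv
```

The actual method operates under a changed-edge budget for unnoticeability and attacks a GNN-based classifier rather than this toy aggregator, but the sketch captures the core tension: each candidate perturbation is scored on both the privacy objective and the utility constraint.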