Graph Neural Networks (GNNs), which generalize traditional deep neural networks to graph data, have achieved state-of-the-art performance on several graph analytical tasks. We focus on how trained GNN models could leak information about the \emph{member} nodes that they were trained on. We introduce two realistic settings for performing a membership inference (MI) attack on GNNs. While choosing the simplest possible attack model, which utilizes only the posteriors of the trained model (black-box access), we thoroughly analyze the properties of GNNs and of the datasets that dictate the differences in their robustness to MI attacks. Whereas in traditional machine learning models overfitting is considered the main cause of such leakage, we show that in GNNs the additional structural information is the major contributing factor. We support our findings with extensive experiments on four representative GNN models. To prevent MI attacks on GNNs, we propose two effective defenses that decrease the attacker's inference accuracy by up to 60% without degrading the target model's performance. Our code is available at https://github.com/iyempissy/rebMIGraph.
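The black-box setting described above can be illustrated with a minimal sketch of a posterior-based membership inference attack. This is not the paper's attack model; it is a common simple baseline in which the attacker, seeing only the target model's output posteriors, predicts "member" whenever the prediction is unusually confident. The function name, threshold, and posterior values below are all illustrative assumptions.

```python
# Hypothetical sketch of a black-box, confidence-thresholding MI attack.
# The attacker observes only a posterior (class-probability) vector per node
# and predicts "member" when the maximum probability exceeds a threshold,
# exploiting the fact that models tend to be more confident on training data.

def confidence_attack(posterior, threshold=0.9):
    """Predict membership from a single posterior vector (illustrative)."""
    return max(posterior) >= threshold

# Illustrative posteriors: training (member) nodes often yield sharp
# distributions, while unseen (non-member) nodes yield flatter ones.
member_posteriors = [[0.97, 0.02, 0.01], [0.95, 0.03, 0.02]]
non_member_posteriors = [[0.55, 0.30, 0.15], [0.40, 0.35, 0.25]]

member_guesses = [confidence_attack(p) for p in member_posteriors]
non_member_guesses = [confidence_attack(p) for p in non_member_posteriors]
print(member_guesses)      # [True, True]
print(non_member_guesses)  # [False, False]
```

A real attacker would calibrate the threshold (or train a small attack classifier) on posteriors from shadow models; defenses such as those proposed here work by making member and non-member posteriors harder to distinguish.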