Graph Neural Networks (GNNs), which generalize traditional deep neural networks to graph data, have achieved state-of-the-art performance on several graph analytical tasks such as node classification, link prediction, and graph classification. We focus on how trained GNN models could leak information about the \emph{member} nodes that they were trained on. We introduce two realistic inductive settings for carrying out a membership inference (MI) attack on GNNs. While choosing the simplest possible attack model that utilizes the posteriors of the trained model, we thoroughly analyze the properties of GNNs that dictate the differences in their robustness towards MI attacks. The surprising and worrying fact is that the attack is successful even when the target model generalizes well. Whereas overfitting is considered the main cause of such leakage in traditional machine learning models, we show that in GNNs the additional structural information is the major contributing factor. We support our findings by extensive experiments on four representative GNN models. On a positive note, we identify properties of certain models which make them less vulnerable to MI attacks than others.
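To make the posterior-based attack model concrete, the following is a minimal sketch of how such an attack could be assembled. It assumes the standard shadow-model setup from the MI literature (not necessarily the exact pipeline used in this work) and uses synthetic placeholder posteriors in place of outputs from real shadow and target GNNs; all variable names (\texttt{member\_posteriors}, \texttt{target\_posteriors}, etc.) are illustrative.

\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder posteriors standing in for class-probability vectors that a
# shadow GNN would produce. "member" rows correspond to nodes used to train
# the shadow model, "non_member" rows to held-out nodes.
rng = np.random.default_rng(0)
member_posteriors = rng.dirichlet(alpha=[5.0, 1.0, 1.0], size=500)      # synthetic
non_member_posteriors = rng.dirichlet(alpha=[2.0, 1.5, 1.5], size=500)  # synthetic

# Label 1 = member, 0 = non-member. Sorting each posterior vector makes the
# attack invariant to which class was predicted (a common MI preprocessing step).
X = np.vstack([member_posteriors, non_member_posteriors])
X = -np.sort(-X, axis=1)
y = np.concatenate([np.ones(len(member_posteriors)),
                    np.zeros(len(non_member_posteriors))])

# Simplest possible attack model: a small binary classifier mapping a
# posterior vector to a membership score.
attack = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
attack.fit(X, y)

# At attack time, query the *target* GNN for a node's posterior and score it.
target_posteriors = rng.dirichlet(alpha=[4.0, 1.2, 1.2], size=200)  # synthetic queries
scores = attack.predict_proba(-np.sort(-target_posteriors, axis=1))[:, 1]
print("mean membership score on queried nodes:", scores.mean())
\end{verbatim}

In practice the placeholder posteriors would be replaced by outputs of a shadow GNN trained on data drawn from a distribution similar to the target's, and the attack classifier would then be applied to posteriors obtained by querying the target model.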