Much real-world data comes in the form of graphs. Graph neural networks (GNNs), a new family of machine learning (ML) models, have been proposed to fully leverage graph data to build powerful applications. In particular, inductive GNNs, which can generalize to unseen data, have become mainstream in this direction. Machine learning models have shown great potential in various tasks and have been deployed in many real-world scenarios. Training a good model requires a large amount of data as well as computational resources, which makes the resulting model valuable intellectual property. Previous research has shown that ML models are prone to model stealing attacks, which aim to steal the functionality of the target models. However, most existing attacks focus on models trained on images and text, while little attention has been paid to models trained on graph data, i.e., GNNs. In this paper, we fill this gap by proposing the first model stealing attacks against inductive GNNs. We systematically define the threat model and propose six attacks based on the adversary's background knowledge and the responses of the target models. Our evaluation on six benchmark datasets shows that the proposed model stealing attacks against GNNs achieve promising performance.
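To make the general notion of model stealing against an inductive GNN concrete, the sketch below trains a surrogate GNN to mimic a target model's posterior responses on adversary-chosen query graphs. This is only a minimal illustration under assumptions: the query oracle `target_query`, the two-layer GCN surrogate, and all hyperparameters are hypothetical and do not correspond to the six specific attacks proposed in the paper.

```python
# Hedged sketch: model stealing via knowledge-distillation-style training
# of a surrogate GNN on the target model's responses (illustrative only).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class SurrogateGNN(torch.nn.Module):
    """Two-layer GCN the adversary trains to imitate the target model."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def steal(target_query, query_graphs, in_dim, num_classes, epochs=100):
    """Train a surrogate on the target's responses to the adversary's query graphs.

    target_query(x, edge_index) -> per-node class posteriors (hypothetical oracle)
    query_graphs: list of (x, edge_index) pairs from the adversary's own data
    """
    surrogate = SurrogateGNN(in_dim, hidden_dim=64, num_classes=num_classes)
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=0.01)

    for _ in range(epochs):
        for x, edge_index in query_graphs:
            with torch.no_grad():
                teacher_probs = target_query(x, edge_index)  # query the target model
            optimizer.zero_grad()
            student_log_probs = F.log_softmax(surrogate(x, edge_index), dim=-1)
            # Match the surrogate's output distribution to the target's responses.
            loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
            loss.backward()
            optimizer.step()
    return surrogate
```

Because the surrogate only needs the target's query responses (not its parameters or training data), the same loop applies whenever the target exposes a prediction API; the paper's attacks differ in what the adversary knows and in what form the target's responses take.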