Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications. Building a powerful GNN model is not a trivial task, as it requires a large amount of training data, powerful computing resources, and human expertise in tuning the model. Moreover, with the development of adversarial attacks, e.g., model stealing attacks, protecting the ownership of GNN models becomes challenging. To avoid copyright infringement on GNNs, it is necessary to verify the ownership of GNN models. This paper presents a watermarking framework for GNNs for both graph and node classification tasks. We 1) design two strategies to generate watermarked data for the graph classification task and one strategy for the node classification task, 2) embed the watermark into the host model through training to obtain the watermarked GNN model, and 3) verify the ownership of a suspicious model in a black-box setting. The experiments show that our framework can verify the ownership of GNN models with a very high probability (up to $99\%$) for both tasks. Finally, we experimentally show that our watermarking approach is robust against a state-of-the-art model extraction technique and four state-of-the-art defenses against backdoor attacks.
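The black-box verification step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes the owner holds a set of watermarked (trigger) inputs with their watermark labels, queries the suspect model as an opaque predictor, and declares ownership when the watermark match rate exceeds an assumed decision threshold.

```python
# Hypothetical sketch of black-box ownership verification for a watermarked
# GNN. The suspect model is treated as an opaque function from an input to a
# predicted label; the watermark set and the 0.9 threshold are illustrative
# assumptions, not values from the paper.

from typing import Callable, List, Tuple


def verify_ownership(
    suspect_model: Callable[[object], int],   # black-box: input -> predicted label
    watermark_set: List[Tuple[object, int]],  # (trigger input, watermark label) pairs
    threshold: float = 0.9,                   # assumed decision threshold
) -> Tuple[bool, float]:
    """Return (ownership_claimed, watermark_accuracy) for the suspect model."""
    matches = sum(1 for x, y_wm in watermark_set if suspect_model(x) == y_wm)
    accuracy = matches / len(watermark_set)
    return accuracy >= threshold, accuracy


# Toy usage: a "stolen" model that has retained the watermark behaviour
# (it predicts the watermark class on every trigger input).
stolen_model = lambda x: 1
wm_data = [(f"trigger_graph_{i}", 1) for i in range(10)]
claimed, acc = verify_ownership(stolen_model, wm_data)
```

An independent model that never saw the watermarked data would match the watermark labels only at chance level, so its accuracy would fall well below the threshold and the claim would be rejected.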