Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications. Building a powerful GNN model is not a trivial task: it requires a large amount of training data, substantial computing resources, and human expertise in fine-tuning the model. Moreover, with the development of adversarial attacks such as model stealing, authenticating the ownership of GNN models has become challenging. To avoid copyright infringement on GNNs, it is necessary to verify the ownership of GNN models. In this paper, we present a watermarking framework for GNNs covering both graph and node classification tasks. We 1) design two strategies to generate watermarked data for the graph classification task and one strategy for the node classification task, 2) embed the watermark into the host model through training to obtain the watermarked GNN model, and 3) verify the ownership of a suspicious model in a black-box setting. The experiments show that our framework can verify the ownership of GNN models with a very high probability (around $100\%$) for both tasks. In addition, we experimentally show that our watermarking approach remains effective even when the suspicious model uses a different architecture from the owner's.
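The black-box verification step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the owner holds a watermarked trigger set with target labels, queries the suspicious model only through its predictions, and claims ownership when the match rate exceeds a chosen threshold (the threshold value here is a placeholder assumption; the paper reports verification probabilities around 100%).

```python
def verify_ownership(model_predict, trigger_inputs, trigger_labels, threshold=0.9):
    """Black-box ownership check on a suspicious model.

    model_predict  : callable returning a predicted label for one input
                     (only query access is assumed, no weights)
    trigger_inputs : watermarked inputs held privately by the owner
    trigger_labels : the watermark labels embedded during training
    threshold      : minimum watermark accuracy to claim ownership
                     (hypothetical value for illustration)
    """
    matches = sum(
        1 for x, y in zip(trigger_inputs, trigger_labels)
        if model_predict(x) == y
    )
    accuracy = matches / len(trigger_labels)
    # A model that was never trained on the watermark should agree
    # with the trigger labels only at chance level, far below threshold.
    return accuracy >= threshold, accuracy
```

A watermarked model will reproduce the trigger labels almost perfectly, while an independently trained model will not, which is what separates the owner's model from unrelated ones in this scheme.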