Graph neural networks (GNNs) have recently gained much attention for node and graph classification tasks on graph-structured data. However, multiple recent works have shown that an attacker can easily make GNNs predict incorrectly by perturbing the graph structure, i.e., adding or deleting edges in the graph. We aim to defend against such attacks by developing certifiably robust GNNs. Specifically, we prove the first certified robustness guarantee of any GNN for both node and graph classification against structural perturbation. Moreover, we show that our certified robustness guarantee is tight. Our results are based on a recently proposed technique called randomized smoothing, which we extend to graph data. We also empirically evaluate our method for both node and graph classification on multiple GNNs and multiple benchmark datasets. For instance, on the Cora dataset, a graph convolutional network with our randomized smoothing achieves a certified accuracy of 0.49 when the attacker can arbitrarily add/delete at most 15 edges in the graph.
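The core idea of randomized smoothing on graph data can be sketched as follows: randomize the binary adjacency structure by independently flipping each edge/non-edge with some probability, then take a majority vote of the base GNN over the noisy samples. The sketch below is a minimal illustration of this idea, not the paper's implementation; `base_classifier`, `beta`, and `n_samples` are hypothetical names chosen for clarity.

```python
import numpy as np

def smoothed_predict(base_classifier, adj, beta=0.9, n_samples=1000, rng=None):
    """Majority-vote prediction of a smoothed classifier over a binary
    adjacency matrix.

    Each entry of `adj` is kept with probability `beta` and flipped
    (0 <-> 1) with probability 1 - beta; the smoothed classifier outputs
    the most frequent label returned by `base_classifier` across the
    noisy samples. A certified radius can then be derived from the
    vote counts, as in randomized-smoothing analyses.
    """
    rng = rng or np.random.default_rng(0)
    counts = {}
    for _ in range(n_samples):
        flips = rng.random(adj.shape) > beta   # True where an entry is flipped
        noisy = np.where(flips, 1 - adj, adj)  # flip selected 0/1 entries
        label = base_classifier(noisy)
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)
```

Because each noisy sample differs from the clean graph in only a small random set of entries, a base classifier whose prediction is stable under such flips keeps its label under the smoothed classifier, which is what the certified guarantee formalizes.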