Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure. However, because manual inspection of a graph is difficult, it is unclear whether the studied perturbations always preserve a core assumption of adversarial examples: that of unchanged semantic content. To address this problem, we introduce a more principled notion of an adversarial graph, which is aware of semantic content change. Using Contextual Stochastic Block Models (CSBMs) and real-world graphs, our results uncover: $i)$ for a majority of nodes, the prevalent perturbation models include a large fraction of perturbed graphs that violate the unchanged-semantics assumption; $ii)$ surprisingly, all assessed GNNs show over-robustness, that is, robustness beyond the point of semantic change. We find this to be a phenomenon complementary to adversarial examples and show that incorporating the label structure of the training graph into the inference process of GNNs significantly reduces over-robustness, while having a positive effect on test accuracy and adversarial robustness. Theoretically, leveraging our new semantics-aware notion of robustness, we prove that there is no robustness-accuracy tradeoff when inductively classifying a newly added node.
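For illustration, below is a minimal sketch of sampling a two-class CSBM graph of the kind referenced above: class labels determine edge probabilities (intra-class $p$, inter-class $q$) and class-dependent Gaussian node features. Parameter names and defaults are hypothetical and simplified, not the exact experimental configuration of the paper.

```python
import numpy as np

def sample_csbm(n=200, p=0.05, q=0.01, mu=1.0, sigma=1.0, d=16, seed=0):
    """Sample a two-class Contextual Stochastic Block Model (CSBM) graph.

    Nodes receive labels y in {0, 1}; an edge (i, j) is drawn with
    probability p if y_i == y_j and q otherwise; node features are
    Gaussian with a class-dependent mean (+/- mu along a random direction).
    All parameter choices here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)                      # community labels
    same = (y[:, None] == y[None, :])                   # same-class mask
    probs = np.where(same, p, q)                        # per-pair edge probabilities
    upper = np.triu(rng.random((n, n)) < probs, k=1)    # sample upper triangle only
    adj = upper | upper.T                               # symmetric adjacency, no self-loops
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)              # shared unit direction for class means
    signs = 2 * y - 1                                   # map labels {0,1} to {-1,+1}
    features = signs[:, None] * mu * direction + sigma * rng.normal(size=(n, d))
    return adj, features, y

adj, X, y = sample_csbm()
```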