Graph neural networks (GNNs) have achieved tremendous success in graph classification and diverse downstream real-world applications. Despite this success, existing adversarial attacks on graph classification are either limited to structure attacks or restricted to local information. This calls for a more general attack framework on graph classification, which is challenging to design because adversarial perturbations must be generated at the local node level from global, graph-level information. To address this "global-to-local" problem, we present CAMA, a general framework that generates adversarial examples by manipulating graph structure and node features in a hierarchical manner. Specifically, we make use of Graph Class Activation Mapping and its variant to produce node-level importance scores aligned with the graph classification task. Guided by both node-level and subgraph-level importance, heuristically designed algorithms then perform feature and structure attacks under unnoticeable perturbation budgets. Experiments on attacking four state-of-the-art graph classification models across six real-world benchmarks verify the flexibility and effectiveness of our framework.
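To make the node-importance step concrete, below is a minimal sketch of Graph Class Activation Mapping (CAM) for a graph classifier, assuming a two-layer GCN with global mean pooling and a linear readout. The model and function names are illustrative assumptions, not the paper's implementation; CAM scores each node v for a target class c as the inner product between the class weights w_c of the readout and the node embedding h_v.

```python
# Minimal sketch of Graph Class Activation Mapping (CAM) for node-level
# importance in graph classification. Assumes PyTorch Geometric; the class
# and function names here are illustrative, not the authors' code.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class GCNClassifier(torch.nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)
        self.readout = torch.nn.Linear(hid_dim, num_classes)

    def node_embeddings(self, x, edge_index):
        # Node representations from the last message-passing layer.
        h = F.relu(self.conv1(x, edge_index))
        return F.relu(self.conv2(h, edge_index))

    def forward(self, x, edge_index, batch):
        h = self.node_embeddings(x, edge_index)
        return self.readout(global_mean_pool(h, batch))


def cam_node_importance(model: GCNClassifier, x, edge_index, target_class: int):
    """CAM score of each node for `target_class`: w_c^T h_v (higher = more important)."""
    h = model.node_embeddings(x, edge_index)      # [num_nodes, hid_dim]
    w_c = model.readout.weight[target_class]      # [hid_dim]
    return h @ w_c                                # [num_nodes]
```

In an attack pipeline of the kind the abstract describes, such scores could be used to rank nodes (and, by aggregation, subgraphs) so that feature or structure perturbations are concentrated on the most class-relevant regions while respecting a fixed perturbation budget.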