Recent studies show that Graph Neural Networks (GNNs) are vulnerable and easily fooled by small perturbations, which has raised considerable concerns about adopting GNNs in various safety-critical applications. In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA), in which the adversary poisons the graph by injecting fake nodes instead of modifying existing structures or node attributes. Inspired by findings that adversarial attacks are related to increased heterophily on perturbed graphs (the adversary tends to connect dissimilar nodes), we propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of graph data and model. Specifically, the model generates pseudo-labels for unlabeled nodes in each round of training, which are used to remove heterophilous edges connecting nodes with distinct labels. The cleaner graph is fed back to the model, producing more informative pseudo-labels. In this iterative manner, model robustness is progressively enhanced. We present a theoretical analysis of the effect of homophilous augmentation and provide a guarantee of the proposal's validity. Experimental results on diverse real-world datasets empirically demonstrate the effectiveness of CHAGNN in comparison with recent state-of-the-art defense methods.
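The following is a minimal sketch of the cooperative data-and-model loop described above, not the authors' official CHAGNN implementation: the two-layer GCN backbone, the confidence threshold `tau`, and the round/epoch counts are illustrative assumptions introduced here.

```python
# Sketch of cooperative homophilous augmentation (illustrative, not the
# official CHAGNN code). Assumptions: dense adjacency, a simple two-layer
# GCN, and a confidence threshold `tau` for trusting pseudo-labels.
import torch
import torch.nn.functional as F

def normalize_adj(adj):
    """Symmetrically normalize an adjacency matrix with self-loops."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

def homophilous_augmentation(model, x, adj, tau=0.9):
    """Drop edges whose endpoints receive distinct, confident pseudo-labels."""
    with torch.no_grad():
        probs = F.softmax(model(x, normalize_adj(adj)), dim=1)
    conf, pseudo = probs.max(dim=1)
    src, dst = adj.nonzero(as_tuple=True)
    # An edge is treated as heterophilous when both endpoints are
    # confidently pseudo-labeled and the labels disagree.
    hetero = (pseudo[src] != pseudo[dst]) & (conf[src] > tau) & (conf[dst] > tau)
    cleaned = adj.clone()
    cleaned[src[hetero], dst[hetero]] = 0.0
    return cleaned

def train_cooperatively(x, adj, labels, train_mask, rounds=5, epochs=50):
    """Alternate between training the model and cleaning the graph."""
    model = GCN(x.size(1), 16, int(labels.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(rounds):
        adj_norm = normalize_adj(adj)
        for _ in range(epochs):  # train on the current (cleaner) graph
            opt.zero_grad()
            out = model(x, adj_norm)
            F.cross_entropy(out[train_mask], labels[train_mask]).backward()
            opt.step()
        # Feed the cleaner graph back, yielding more informative pseudo-labels.
        adj = homophilous_augmentation(model, x, adj)
    return model, adj
```

The key design choice mirrored here is the feedback loop: pruning heterophilous edges only where both pseudo-labels are confident guards against removing legitimate edges early in training, when predictions are still noisy.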