Benefiting from the message passing mechanism, Graph Neural Networks (GNNs) have been successful on a wide variety of tasks over graph data. However, recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure. A straightforward remedy is to model the edge weights by learning a metric function between the pairwise representations of two end nodes, which attempts to assign low weights to adversarial edges. Existing methods use either raw features or representations learned by supervised GNNs to model the edge weights. However, both strategies face immediate problems: raw features cannot represent various properties of nodes (e.g., structural information), and representations learned by supervised GNNs may suffer from the poor performance of the classifier on the poisoned graph. We need representations that carry both feature information and as much correct structural information as possible, and that are insensitive to structural perturbations. To this end, we propose an unsupervised pipeline, named STABLE, to optimize the graph structure. Finally, we input the well-refined graph into a downstream classifier. For this part, we design an advanced GCN that significantly enhances the robustness of vanilla GCN without increasing the time complexity. Extensive experiments on four real-world graph benchmarks demonstrate that STABLE outperforms the state-of-the-art methods and successfully defends against various attacks.
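To make the metric-based edge-weighting idea concrete, the following is a minimal sketch, not the authors' actual method: it weights each edge by the cosine similarity between its end-node representations and prunes edges below a threshold. The `threshold` value and the use of raw node features as the representations are illustrative assumptions.

```python
import numpy as np

def prune_adversarial_edges(features, edges, threshold=0.1):
    """Weight each edge by cosine similarity between its end-node
    representations; drop edges below `threshold` (assumed value).
    Adversarial edges often connect dissimilar nodes, so they tend
    to receive low weights under such a metric."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.clip(norms, 1e-12, None)
    kept = []
    for u, v in edges:
        weight = float(normed[u] @ normed[v])  # cosine similarity
        if weight >= threshold:
            kept.append((u, v, weight))
    return kept

# Toy example: node 2 is dissimilar to node 0, so edge (0, 2) is pruned.
feats = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])
edges = [(0, 1), (0, 2)]
print(prune_adversarial_edges(feats, edges))
```

In practice the representations would come from an embedding method rather than raw features, which is precisely the gap the abstract's unsupervised pipeline is meant to address.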