Recent studies have shown that graph neural networks (GNNs) lack robustness and are vulnerable to adversarial perturbations, and can therefore be easily fooled. Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance. However, their high time and space complexity makes them unmanageable for large-scale graphs and becomes the major bottleneck preventing practical use. We argue that the main reason is that these methods have to operate on the whole graph, so their time and space costs grow with the scale of the data. In this work, we propose an efficient Simplified Gradient-based Attack (SGA) method to bridge this gap. SGA causes GNNs to misclassify specific target nodes through a multi-stage attack framework that needs only a much smaller subgraph. In addition, we present a practical metric named Degree Assortativity Change (DAC) to measure the impact of adversarial attacks on graph data. We evaluate our attack method on four real-world graphs by attacking several commonly used GNNs. The experimental results demonstrate that SGA achieves significant improvements in time and memory efficiency while maintaining competitive attack performance compared to state-of-the-art attack techniques. Code is available at: https://github.com/EdisonLeeeee/SGAttack.
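To make the "much smaller subgraph" idea concrete, below is a minimal sketch of extracting a k-hop neighborhood around a target node with networkx. This illustrates only the general locality principle behind SGA's efficiency claim, not the authors' exact subgraph construction or attack steps; the function name, the choice of k, and the toy graph are all illustrative assumptions.

```python
import networkx as nx


def extract_khop_subgraph(graph: nx.Graph, target: int, k: int = 2) -> nx.Graph:
    """Return the k-hop ego subgraph around a target node.

    Restricting gradient computation to such a local subgraph, rather than
    the full graph, is the general idea behind attacking a specific target
    node efficiently; SGA's actual subgraph construction may differ.
    """
    return nx.ego_graph(graph, target, radius=k)


# Hypothetical usage on a toy graph
G = nx.karate_club_graph()
sub = extract_khop_subgraph(G, target=0, k=2)
print(f"full graph: {G.number_of_nodes()} nodes, subgraph: {sub.number_of_nodes()} nodes")
```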
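The DAC metric can likewise be sketched from its name: it compares the degree assortativity coefficient of the graph before and after perturbation. The sign convention (perturbed minus clean) and the example perturbation below are assumptions for illustration; the paper may report the metric differently.

```python
import networkx as nx


def degree_assortativity_change(clean: nx.Graph, perturbed: nx.Graph) -> float:
    """Degree Assortativity Change (DAC): the shift in the degree
    assortativity coefficient caused by an adversarial perturbation.
    The sign convention here (perturbed minus clean) is an assumption.
    """
    return (nx.degree_assortativity_coefficient(perturbed)
            - nx.degree_assortativity_coefficient(clean))


# Hypothetical usage: insert one adversarial edge and measure the impact
clean = nx.karate_club_graph()
perturbed = clean.copy()
perturbed.add_edge(0, 33)  # example edge insertion between two hub nodes
print(f"DAC = {degree_assortativity_change(clean, perturbed):+.4f}")
```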