Graph-structured data exist in numerous real-life applications. As a state-of-the-art graph neural network, the graph convolutional network (GCN) plays an important role in processing graph-structured data. However, a recent study reported that GCNs are vulnerable to adversarial attacks, which means that GCN models may suffer malicious attacks through unnoticeable modifications of the data. Among all adversarial attacks on GCNs, there is a special kind of attack called the universal adversarial attack, which generates a single perturbation that can be applied to any sample and causes GCN models to output incorrect results. Although universal adversarial attacks in computer vision have been extensively researched, few works study universal adversarial attacks on graph-structured data. In this paper, we propose a targeted universal adversarial attack against GCNs. Our method employs a few nodes as attack nodes, whose attack capability is enhanced by a small number of fake nodes connected to them. During an attack, any victim node is misclassified by the GCN as the attack node class as long as it is linked to the attack nodes. Experiments on three popular datasets show that the average attack success rate of the proposed attack on any victim node in the graph reaches 83% when using only 3 attack nodes and 6 fake nodes. We hope that our work will make the community aware of the threat of this type of attack and raise attention toward its future defense.
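The mechanism sketched in the abstract (a few attack nodes, amplified by fake nodes, pulling any linked victim toward the attack class through neighborhood aggregation) can be illustrated with a toy example. The sketch below is not the paper's method; it is a minimal NumPy illustration, using an invented 12-node graph, one-hot class features, and an identity "classifier" folded into two symmetric-normalized GCN propagation layers, of how linking a victim to attack nodes can flip its predicted class.

```python
import numpy as np

def gcn_layer(A, X):
    """One GCN propagation: D^{-1/2} (A + I) D^{-1/2} X.
    The weight matrix is folded into the identity for this toy sketch,
    so propagated features double as class logits."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]) @ X

N = 12
A = np.zeros((N, N))

def link(i, j):
    A[i, j] = A[j, i] = 1.0

# benign chain of class-0 nodes (hypothetical layout)
link(0, 1); link(1, 2)
# attack nodes 3-5 (class 1), each reinforced by two fake nodes,
# matching the abstract's budget of 3 attack nodes and 6 fake nodes
for a, (f1, f2) in zip([3, 4, 5], [(6, 7), (8, 9), (10, 11)]):
    link(a, f1); link(a, f2)

# one-hot class features
X = np.zeros((N, 2))
X[[0, 1, 2], 0] = 1.0   # benign class 0
X[3:, 1] = 1.0          # attack and fake nodes carry class 1

victim = 0
clean = gcn_layer(A, gcn_layer(A, X))[victim]

# the attack: link the victim to the attack nodes
for a in [3, 4, 5]:
    link(victim, a)
attacked = gcn_layer(A, gcn_layer(A, X))[victim]

print(clean.argmax(), attacked.argmax())  # prediction flips from 0 to 1
```

The fake nodes matter because they raise the class-1 mass that each attack node aggregates in the first layer, which the victim then absorbs in the second layer; without them, the victim's own class-0 neighborhood can still dominate.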