While deep neural networks have achieved great success in graph analysis, recent work has shown that they are vulnerable to adversarial attacks. Compared with adversarial attacks on image classification, performing adversarial attacks on graphs is more challenging because of the discrete and non-differentiable nature of a graph's adjacency matrix. In this work, we propose Cluster Attack -- a Graph Injection Attack (GIA) on node classification, which injects fake nodes into the original graph to degrade the performance of graph neural networks (GNNs) on certain victim nodes while affecting the other nodes as little as possible. We demonstrate that a GIA problem can be equivalently formulated as a graph clustering problem; thus, the discrete optimization over the adjacency matrix can be solved in the context of graph clustering. In particular, we propose to measure the similarity between victim nodes by a metric of Adversarial Vulnerability, which captures how each victim node is affected by an injected fake node, and to cluster the victim nodes accordingly. Our attack is performed in a practical and unnoticeable query-based black-box manner, in which only a small portion of the nodes on the graph can be accessed. Theoretical analysis and extensive experiments demonstrate the effectiveness of our method, which fools the node classifiers with only a small number of queries.
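To make the clustering view of GIA concrete, the sketch below illustrates the pipeline the abstract describes: estimate a per-victim vulnerability signature by querying the black-box model, then cluster victims so that each cluster can be served by one injected fake node. This is a minimal illustration under stated assumptions, not the paper's actual algorithm: the `query_fn` oracle, the random-probe proxy for Adversarial Vulnerability, and the plain k-means routine are all hypothetical stand-ins.

```python
import numpy as np

def adversarial_vulnerability(query_fn, victim, feat_dim, n_probes=8, seed=0):
    """Hypothetical proxy for the Adversarial Vulnerability metric: probe the
    black-box model with a few random fake-node features and record how the
    victim's predicted class probabilities shift in response."""
    rng = np.random.default_rng(seed)
    clean = query_fn(victim, None)  # prediction on the clean graph (no fake node)
    shifts = [query_fn(victim, rng.normal(size=feat_dim)) - clean
              for _ in range(n_probes)]
    return np.concatenate(shifts)   # the victim's vulnerability "signature"

def cluster_victims(signatures, n_fake, n_iter=20, seed=0):
    """Plain k-means over vulnerability signatures: victims assigned to the
    same cluster are assumed to be attackable by the same fake node."""
    rng = np.random.default_rng(seed)
    X = np.stack(signatures)                              # (n_victims, d)
    centers = X[rng.choice(len(X), size=n_fake, replace=False)]
    for _ in range(n_iter):
        # Assign each victim to its nearest center, then recompute centers.
        assign = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(n_fake):
            members = X[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return assign  # assign[i] = index of the fake node connected to victim i
```

In this reading, each resulting cluster defines the edge set of one injected fake node, whose features would then be refined with further queries; the cluster, rather than the individual victim, becomes the unit of attack, which is what keeps the query budget small.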