Graph neural networks (GNNs) have attracted increasing interest. With the broad deployment of GNNs in real-world applications, there is an urgent need to understand the robustness of GNNs under adversarial attacks, especially in realistic setups. In this work, we study the problem of attacking GNNs in a restricted and realistic setup: perturbing the features of a small set of nodes, with no access to model parameters or model predictions. Our formal analysis draws a connection between this type of attack and an influence maximization problem on the graph. This connection not only enhances our understanding of the problem of adversarial attacks on GNNs, but also allows us to propose a group of effective and practical attack strategies. Our experiments verify that the proposed attack strategies significantly degrade the performance of three popular GNN models and outperform baseline adversarial attack strategies.