Graph neural networks (GNNs) have seen significant industrial adoption owing to their impressive performance on a variety of predictive tasks. Performance alone, however, is not enough. Any widely deployed machine learning algorithm must also be robust to adversarial attacks. In this work, we investigate this aspect of GNNs, identify vulnerabilities, and link them to graph properties that may guide the development of more secure and robust GNNs. Specifically, we formulate the problem of task- and model-agnostic evasion attacks, in which adversaries modify the test graph to degrade the performance of any unknown downstream task. The proposed algorithm, GRAND (GRaph Attack via Neighborhood Distortion), shows that distorting node neighborhoods is effective in drastically compromising prediction performance. Although neighborhood distortion is an NP-hard problem, GRAND designs an effective heuristic through a novel combination of a Graph Isomorphism Network with deep $Q$-learning. Extensive experiments on real datasets show that, on average, GRAND is up to $50\%$ more effective than state-of-the-art techniques, while being more than $100$ times faster.
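The abstract names a combination of a Graph Isomorphism Network (GIN) encoder with deep $Q$-learning but does not detail it. Below is a minimal, hypothetical PyTorch sketch of one way such a combination could be wired: GIN node embeddings feed a $Q$-network that scores candidate edge flips. The class names (`GINLayer`, `EdgeFlipQNet`), the dense-adjacency formulation, and the greedy selection loop are all illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (NOT the authors' released code): GIN embeddings
# scoring candidate edge flips through a deep Q-network.
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN layer: h_v' = MLP((1 + eps) * h_v + sum over neighbors of h_u)."""
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):
        # adj: dense [n, n] adjacency; adj @ h is sum aggregation over neighbors.
        return self.mlp((1 + self.eps) * h + adj @ h)

class EdgeFlipQNet(nn.Module):
    """Q-network scoring an edge flip (u, v) from the two endpoint embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.encoder = nn.ModuleList([GINLayer(dim) for _ in range(2)])
        self.q_head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h, adj, edge_candidates):
        for layer in self.encoder:
            h = torch.relu(layer(h, adj))
        u, v = edge_candidates[:, 0], edge_candidates[:, 1]
        # One Q-value per candidate (u, v) flip.
        return self.q_head(torch.cat([h[u], h[v]], dim=-1)).squeeze(-1)

# Toy usage: pick the highest-Q flip on a random graph (reward shaping,
# replay buffer, and target network omitted for brevity).
n, d = 6, 16
h = torch.randn(n, d)                                # random node features
adj = torch.bernoulli(torch.full((n, n), 0.3))       # random adjacency, illustration only
cands = torch.tensor([[0, 1], [2, 3], [4, 5]])       # candidate edge flips
q = EdgeFlipQNet(d)(h, adj, cands)
best_flip = cands[q.argmax()]                        # greedy choice of edge to perturb
```

In a full attack loop, the chosen flip would be applied to `adj`, a reward reflecting the induced neighborhood distortion would be observed, and the $Q$-network would be trained from replayed transitions in the standard deep $Q$-learning fashion.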