Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of their applications. Yet, existing studies of their vulnerability to adversarial attacks rely on relatively small graphs. We address this gap and study how to attack and defend GNNs at scale. We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation despite optimizing over a number of parameters that is quadratic in the number of nodes. We show that common surrogate losses are not well-suited for global attacks on GNNs; our alternatives can double the attack strength. Moreover, to improve GNNs' reliability we design a robust aggregation function, Soft Median, resulting in an effective defense at all scales. We evaluate our attacks and defense with standard GNNs on graphs more than 100 times larger than those in previous work. We even scale one order of magnitude further by extending our techniques to a scalable GNN.
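To illustrate the robust-aggregation idea behind Soft Median, the sketch below shows one plausible realization: each neighbor embedding is weighted by its closeness to the dimension-wise median, with weights from a temperature-scaled softmax, so outlying (potentially adversarial) neighbors contribute little. The exact scaling and weighting scheme here are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def soft_median(X, T=1.0):
    """Soft-Median-style aggregation sketch (assumed form, not the exact paper definition).

    X: (n_neighbors, d) array of neighbor embeddings.
    T: temperature; smaller T concentrates weight on points near the median.
    Returns a (d,) aggregated embedding.
    """
    med = np.median(X, axis=0)                  # dimension-wise median (need not be an actual row)
    dist = np.linalg.norm(X - med, axis=1)      # each neighbor's distance to the median
    logits = -dist / (T * np.sqrt(X.shape[1]))  # closer neighbors get larger logits
    w = np.exp(logits - logits.max())           # numerically stable softmax
    w /= w.sum()
    return w @ X                                # weighted mean, down-weighting outliers
```

Unlike a plain mean, a single adversarially inserted neighbor far from the others receives a near-zero weight, so the aggregate stays close to the clean neighbors; unlike a hard (elementwise) median, the result remains a differentiable function of all inputs, which matters for training.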