Graph neural networks (GNNs) have shown broad applicability in a variety of domains. These domains, e.g., social networks and product recommendations, are fertile ground for malicious users and behavior. In this paper, we show that GNNs are vulnerable to the extremely limited (and thus quite realistic) scenario of a single-node adversarial attack, where the perturbed node cannot be chosen by the attacker. That is, an attacker can force the GNN to classify any target node as a chosen label by only slightly perturbing the features or the neighbor list of another single arbitrary node in the graph, even without being able to select that specific attacker node. When the adversary is allowed to select the attacker node, these attacks are even more effective. We demonstrate empirically that our attack is effective across various common GNN types (e.g., GCN, GraphSAGE, GAT, GIN) and robustly optimized GNNs (e.g., Robust GCN, SM GCN, GAL, LAT-GCN), outperforming previous attacks across different real-world datasets in both targeted and non-targeted settings. Our code is available at https://github.com/benfinkelshtein/SINGLE .