Graph Neural Networks (GNNs) have drawn significant attention over the years and have been broadly applied to critical applications that demand strong robustness or rigorous security guarantees, such as product recommendation and user behavior modeling. In these scenarios, adversaries have a strong incentive to exploit GNN vulnerabilities and degrade model performance. Prior attacks mainly rely on structural perturbations or node injections into existing graphs, guided by gradients from surrogate models. Although these attacks deliver promising results, several limitations remain. Structural perturbation attacks require the adversary to manipulate the existing graph topology, which is impractical in most circumstances. Node injection attacks are more practical, but current approaches require training surrogate models to simulate a white-box setting, which leads to significant performance degradation when the surrogate architecture diverges from the actual victim model. To bridge these gaps, in this paper, we study the problem of black-box node injection attack, without training a potentially misleading surrogate model. Specifically, we model the node injection attack as a Markov decision process and propose Gradient-free Graph Advantage Actor Critic, namely G2A2C, a reinforcement learning framework in the fashion of advantage actor critic. By directly querying the victim model, G2A2C learns to inject highly malicious nodes under extremely limited attack budgets, while keeping the injected features close to the original node feature distribution. Through comprehensive experiments over eight widely acknowledged benchmark datasets with distinct characteristics, we demonstrate the superior performance of our proposed G2A2C over the existing state-of-the-art attackers. Source code is publicly available at: https://github.com/jumxglhf/G2A2C.
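To make the abstract's setup concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for that) of how a node injection attack cast as a Markov decision process can be trained with advantage actor-critic against a black-box victim. All names here (InjectionPolicy, a2c_step, query_victim) are hypothetical placeholders: the actor samples discrete features for the injected node and an existing node to attach it to, the critic estimates the state value, and the only access to the victim model is a query returning a scalar reward, so no surrogate gradients are involved.

```python
import torch
import torch.nn as nn

class InjectionPolicy(nn.Module):
    """Actor-critic heads: the actor samples features for the injected node
    and the existing node to wire it to; the critic estimates V(s)."""
    def __init__(self, feat_dim, num_nodes, hidden=64):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden)
        self.feat_head = nn.Linear(hidden, feat_dim)   # per-feature Bernoulli probs
        self.edge_head = nn.Linear(hidden, num_nodes)  # attachment logits
        self.value_head = nn.Linear(hidden, 1)         # critic V(s)

    def forward(self, node_feats):
        # node_feats: [num_nodes, feat_dim]; mean-pool into a graph-level state.
        h = torch.relu(self.encoder(node_feats)).mean(dim=0)
        return torch.sigmoid(self.feat_head(h)), self.edge_head(h), self.value_head(h)

def a2c_step(policy, optimizer, node_feats, query_victim):
    """One A2C update. `query_victim(inj_feats, edge)` is the attacker's only
    access to the victim model (black box): it returns a scalar reward, e.g.
    the drop in the victim's confidence on the target node after injection."""
    feat_probs, edge_logits, value = policy(node_feats)
    feat_dist = torch.distributions.Bernoulli(probs=feat_probs)   # discrete features
    edge_dist = torch.distributions.Categorical(logits=edge_logits)
    inj_feats, edge = feat_dist.sample(), edge_dist.sample()
    log_prob = feat_dist.log_prob(inj_feats).sum() + edge_dist.log_prob(edge)
    reward = query_victim(inj_feats, edge.item())      # black-box query, no gradients
    advantage = reward - value.squeeze()               # A(s,a) = R - V(s)
    actor_loss = -log_prob * advantage.detach()        # policy-gradient term
    critic_loss = advantage.pow(2)                     # value regression term
    optimizer.zero_grad()
    (actor_loss + 0.5 * critic_loss).backward()
    optimizer.step()
    return float(reward)
```

Because the reward enters only as a scalar weighting the sampled actions' log-probabilities, the update needs no gradients through the victim model, which is the sense in which the attack is gradient-free.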