Graph neural networks (GNNs) have achieved state-of-the-art performance on many graph learning tasks. However, recent studies show that GNNs are vulnerable to both test-time evasion attacks and training-time poisoning attacks that perturb the graph structure. While existing attack methods achieve promising performance, we design an attack framework that enhances it further. In particular, our framework is inspired by certified robustness, which was originally developed by defenders to provably defend against adversarial attacks; we are the first to leverage its properties from the attacker's perspective to better attack GNNs. Specifically, we first derive nodes' certified perturbation sizes against graph evasion and poisoning attacks, respectively, based on randomized smoothing. A larger certified perturbation size indicates that a node is provably more robust to graph perturbations. This property motivates us to focus on nodes with smaller certified perturbation sizes, as they are easier to misclassify under graph perturbations. Accordingly, we design a certified robustness inspired attack loss that, when incorporated into (any) existing attack, yields its certified robustness inspired counterpart. We apply our framework to existing attacks, and results show that it significantly enhances the base attacks' performance.
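To make the core idea concrete, the following is a minimal sketch (not the paper's actual loss; the function names and the specific inverse weighting scheme are our own illustrative assumptions) of how per-node certified perturbation sizes could reweight an attack loss so that the attacker's budget concentrates on provably less-robust nodes:

```python
import numpy as np

def attack_node_weights(certified_sizes):
    """Hypothetical weighting: nodes with smaller certified perturbation
    sizes get larger weights, so the attack focuses its budget on nodes
    that are theoretically easier to misclassify after perturbation."""
    sizes = np.asarray(certified_sizes, dtype=float)
    raw = 1.0 / (1.0 + sizes)      # monotonically decreasing in certified size
    return raw / raw.sum()          # normalize weights to sum to 1

def certified_robustness_inspired_loss(per_node_losses, certified_sizes):
    """Reweighted attack loss: a weighted sum of per-node attack losses,
    dominated by nodes with small certified perturbation sizes."""
    w = attack_node_weights(certified_sizes)
    return float(np.dot(w, np.asarray(per_node_losses, dtype=float)))

# Example: node 0 has certified size 0 (least robust), node 2 has size 4.
weights = attack_node_weights([0, 1, 4])
```

Any existing base attack that optimizes a sum of per-node losses could, under these assumptions, swap in the reweighted sum above without other changes.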