Deep neural networks, while generalizing well, are known to be sensitive to small adversarial perturbations. This phenomenon poses a severe security threat and calls for an in-depth investigation of the robustness of deep learning models. With the emergence of neural networks for graph-structured data, similar investigations are needed to understand their robustness. It has been found that adversarially perturbing the graph structure and/or node features may result in a significant degradation of model performance. In this work, we show from a different angle that such fragility also arises when the graph contains a few bad-actor nodes, which compromise a trained graph neural network by flipping their connections to any targeted victim. Worse, the bad actors found for one graph model severely compromise other models as well. We call these bad actors ``anchor nodes'' and propose an algorithm, named GUA, to identify them. Thorough empirical investigations suggest an interesting finding that the anchor nodes often belong to the same class; they also corroborate the intuitive trade-off between the number of anchor nodes and the attack success rate. For the Cora dataset, which contains 2708 nodes, as few as six anchor nodes yield an attack success rate higher than 80\% for GCN and three other models.
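To make the attack setting concrete, the following is a minimal sketch (not the paper's GUA algorithm itself) of how a fixed set of anchor nodes can be evaluated: for each victim node, the connections between the victim and the anchors are flipped, and the perturbed graph is fed back to an already-trained model. The names \texttt{model}, \texttt{features}, and \texttt{labels} are hypothetical placeholders assumed here, with \texttt{model(adj, features)} returning per-node class scores.

\begin{verbatim}
# Minimal sketch, assuming a trained GCN-style model `model(adj, features)`
# that returns per-node class scores. All names are illustrative placeholders.
import numpy as np

def flip_anchor_edges(adj, anchors, victim):
    """Return a copy of the adjacency matrix with the connections
    between each anchor node and the victim node flipped (0 <-> 1)."""
    adj_pert = adj.copy()
    for a in anchors:
        adj_pert[a, victim] = 1 - adj_pert[a, victim]
        adj_pert[victim, a] = 1 - adj_pert[victim, a]
    return adj_pert

def attack_success_rate(model, adj, features, labels, anchors, test_nodes):
    """Fraction of correctly classified test nodes that become
    misclassified after flipping their connections to the anchors."""
    clean_pred = model(adj, features).argmax(axis=1)
    fooled, total = 0, 0
    for v in test_nodes:
        if clean_pred[v] != labels[v]:
            continue  # only count nodes the clean model classifies correctly
        total += 1
        pert = model(flip_anchor_edges(adj, anchors, v), features)
        if pert.argmax(axis=1)[v] != labels[v]:
            fooled += 1
    return fooled / max(total, 1)
\end{verbatim}

The same anchor set is reused for every victim, which is what makes the attack ``universal'': once the anchors are chosen, compromising a new target only requires flipping its edges to those anchors.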