Backdoor attacks represent a serious threat to neural network models. A backdoored model misclassifies trigger-embedded inputs into an attacker-chosen target label while behaving normally on benign inputs. There are already numerous works on backdoor attacks against neural networks, but only a few consider graph neural networks (GNNs). As a result, the impact of the trigger injecting position on the performance of backdoor attacks against GNNs has not been studied in depth. To bridge this gap, we conduct an experimental investigation of the performance of backdoor attacks on GNNs. We apply two powerful GNN explainability approaches to select the optimal trigger injecting position to achieve two attacker objectives: a high attack success rate and a low clean accuracy drop. Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness in selecting the trigger injecting position for backdoor attacks on GNNs. For instance, on the node classification task, the backdoor attack with the trigger injecting position selected by GraphLIME reaches over $84\%$ attack success rate with less than a $2.5\%$ accuracy drop.
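To make the core idea concrete, below is a minimal sketch of explainability-guided trigger placement. It assumes an explainability method (such as GraphLIME) has already produced a per-node importance score; the helper name `select_trigger_position` and the selection rule (targeting the least important nodes to keep the clean accuracy drop small) are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: picking a trigger injecting position from
# explainability scores. `node_importance` stands in for the per-node
# scores an explainer such as GraphLIME would produce; the choice of
# least-important nodes is an assumption made for illustration.
import numpy as np

def select_trigger_position(node_importance: np.ndarray,
                            trigger_size: int,
                            least_important: bool = True) -> np.ndarray:
    """Return indices of the nodes where the trigger would be injected.

    node_importance: importance score per node from an explainability method.
    trigger_size:    number of nodes the trigger subgraph occupies.
    least_important: if True, target nodes the explainer deems unimportant,
                     aiming for a low clean accuracy drop.
    """
    order = np.argsort(node_importance)   # ascending importance
    if not least_important:
        order = order[::-1]               # descending importance instead
    return order[:trigger_size]

# Toy usage: five nodes with made-up importance scores.
scores = np.array([0.91, 0.05, 0.40, 0.02, 0.73])
print(select_trigger_position(scores, trigger_size=2))  # -> [3 1]
```

The trigger subgraph would then be embedded at the returned node positions in the poisoned training samples; the same placement rule is applied at inference time to activate the backdoor.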