Graph Attention Networks (GATs) are effective deep learning models for graph-structured data. However, recent work has shown that the classical GAT is vulnerable to adversarial attacks: its performance degrades dramatically under slight perturbations. How to enhance the robustness of GAT is therefore a critical problem. This paper proposes Robust GAT (RoGAT), which improves the robustness of GAT by revising its attention mechanism. Unlike the original GAT, which assigns attention scores to different edges but remains sensitive to perturbations, RoGAT progressively adds an extra dynamic attention score to improve robustness. First, RoGAT revises the edge weights based on the smoothness assumption, which holds for most ordinary graphs. Second, RoGAT further revises the node features to suppress feature noise. An extra attention score is then generated from the dynamic edge weights and used to reduce the impact of adversarial attacks. Experiments against both targeted and untargeted attacks on citation datasets demonstrate that RoGAT outperforms most recent defensive methods.
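To make the mechanism concrete, the sketch below illustrates the two edge-level ideas named in the abstract: down-weighting edges whose endpoint features violate the smoothness assumption, and fusing the resulting dynamic weight into the per-edge attention score. This is a minimal illustration, not the paper's implementation; the cosine-similarity measure and the pruning threshold `tau` are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def smoothness_edge_weights(x, edge_index, tau=0.1):
    """Compute a dynamic weight per edge from the cosine similarity of its
    endpoint features. Under the smoothness assumption, connected nodes
    should have similar features, so dissimilar (possibly adversarial)
    edges are down-weighted or pruned.
    x: [N, F] node features; edge_index: [2, E] edges in COO format.
    `tau` is a hypothetical pruning threshold, not taken from the paper."""
    src, dst = edge_index
    sim = F.cosine_similarity(x[src], x[dst], dim=-1)   # [E], in [-1, 1]
    w = sim.clamp(min=0.0)                              # drop negative similarity
    w = torch.where(w < tau, torch.zeros_like(w), w)    # prune weak edges
    return w

def apply_dynamic_attention(alpha, w):
    """Scale the raw per-edge GAT attention logits alpha [E, heads] by the
    dynamic edge weight before the usual per-neighborhood softmax."""
    return alpha * w.unsqueeze(-1)

# Example: 4 nodes with 16-dim features and 3 edges.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
w = smoothness_edge_weights(x, edge_index)
alpha = torch.randn(3, 8)                               # 8 attention heads
alpha_robust = apply_dynamic_attention(alpha, w)
```

In the abstract's description the weights are refined progressively (together with a feature-denoising step), so a sketch like this would be applied iteratively rather than once.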