Recent studies demonstrate that Graph Neural Networks (GNNs) are vulnerable to slight but adversarially designed perturbations, known as adversarial examples. To address this issue, robust training methods against adversarial examples have received considerable attention in the literature. \emph{Adversarial Training (AT)} is a successful approach that learns a robust model from adversarially perturbed training samples. Existing AT methods on GNNs typically construct adversarial perturbations of the graph structure or the node features. However, such perturbations are less effective and harder to construct on graph data, owing to the discreteness of the graph structure and the dependencies between connected examples. In this work, we seek to address these challenges and propose Spectral Adversarial Training (SAT), a simple yet effective adversarial training approach for GNNs. SAT first adopts a low-rank approximation of the graph structure based on spectral decomposition, and then constructs adversarial perturbations in the spectral domain rather than directly manipulating the original graph structure. To investigate its effectiveness, we apply SAT to three widely used GNNs. Experimental results on four public graph datasets demonstrate that SAT significantly improves the robustness of GNNs against adversarial attacks without sacrificing classification accuracy or training efficiency.
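The core idea sketched in the abstract, a rank-$k$ spectral approximation of the graph followed by a perturbation of the eigenvalues rather than the edges, can be illustrated as follows. This is a hedged toy sketch, not the paper's implementation: the function name `low_rank_spectral_perturb` and the use of bounded random noise in place of the true inner loss maximization are our own illustrative assumptions.

```python
import numpy as np

def low_rank_spectral_perturb(adj, k=2, epsilon=0.1, seed=0):
    """Return a rank-k reconstruction of `adj` with perturbed eigenvalues.

    Illustrative sketch only: a real adversarial step would choose the
    eigenvalue perturbation to maximize the training loss; here bounded
    random noise stands in for that inner maximization.
    """
    # Symmetric adjacency matrix -> real eigendecomposition.
    vals, vecs = np.linalg.eigh(adj)
    # Keep the k eigenpairs of largest magnitude (low-rank approximation).
    idx = np.argsort(-np.abs(vals))[:k]
    vals_k, vecs_k = vals[idx], vecs[:, idx]
    # Perturb in the spectral domain: shift eigenvalues, not discrete edges.
    rng = np.random.default_rng(seed)
    delta = epsilon * rng.standard_normal(k)
    # Reconstruct U diag(lambda + delta) U^T; the result is continuous,
    # unlike edge flips on the original graph.
    return (vecs_k * (vals_k + delta)) @ vecs_k.T

# Toy 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_pert = low_rank_spectral_perturb(A, k=2)
```

Note the design point this makes concrete: perturbing eigenvalues sidesteps the discreteness of edge edits, since the reconstructed matrix varies smoothly with the perturbation.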