Graph neural networks (GNNs) achieve remarkable performance across a variety of application domains. However, GNNs are vulnerable to noise and adversarial attacks in the input data, so making them robust against such perturbations is an important problem. Existing defense methods for GNNs are computationally demanding and do not scale. In this paper, we propose a generic framework for robustifying GNNs, termed Robust Weighted Laplacian GNN (RWL-GNN). The method combines weighted graph Laplacian learning with GNN training. By formulating a unified optimization framework, the proposed method exploits the positive semi-definiteness of the Laplacian matrix, feature smoothness, and latent features, ensuring that adversarial/noisy edges are discarded and the remaining connections in the graph are appropriately weighted. For demonstration, experiments are conducted with the graph convolutional neural network (GCNN) architecture; however, the proposed framework is easily amenable to any existing GNN architecture. Simulation results on benchmark datasets establish the efficacy of the proposed method in both accuracy and computational efficiency. Code is available at https://github.com/Bharat-Runwal/RWL-GNN.
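As a minimal illustration of the ingredients named above, the sketch below (in numpy; all function names, weights, and features are illustrative assumptions, not the paper's exact formulation) shows the combinatorial graph Laplacian, its positive semi-definiteness, and the feature-smoothness term tr(XᵀLX) that a Laplacian-learning objective drives down by suppressing edges between dissimilar nodes:

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W for a symmetric, non-negative
    edge-weight matrix W (hypothetical helper for illustration)."""
    return np.diag(W.sum(axis=1)) - W

def feature_smoothness(W, X):
    """tr(X^T L X) = 0.5 * sum_ij W_ij ||x_i - x_j||^2.
    Small when strongly connected nodes carry similar features."""
    return np.trace(X.T @ laplacian(W) @ X)

# Toy graph: nodes 0 and 1 have similar features; node 2 is dissimilar.
X = np.array([[1.0, 0.0],
              [1.1, 0.1],
              [5.0, 5.0]])

W_clean = np.array([[0.0, 1.0, 0.0],   # edge only between similar nodes
                    [1.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])
W_noisy = W_clean.copy()
W_noisy[0, 2] = W_noisy[2, 0] = 1.0    # adversarial edge 0-2 added

# Laplacian is positive semi-definite: all eigenvalues >= 0.
assert np.min(np.linalg.eigvalsh(laplacian(W_noisy))) >= -1e-9

# The noisy edge sharply increases the smoothness penalty, which is what
# lets a joint optimization downweight or discard it.
print(feature_smoothness(W_clean, X))  # 0.02
print(feature_smoothness(W_noisy, X))  # 41.02
```

In the unified framework described in the abstract, such a smoothness term is minimized jointly with the GNN's task loss over the edge weights, rather than computed once as done here.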