We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored. However, most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak generalization when labeled nodes are scarce. In this paper, we propose a simple yet effective framework -- GRAPH RANDOM NEURAL NETWORKS (GRAND) -- to address these issues. In GRAND, we first design a random propagation strategy to perform graph data augmentation. Then we leverage consistency regularization to optimize the prediction consistency of unlabeled nodes across different data augmentations. Extensive experiments on graph benchmark datasets suggest that GRAND significantly outperforms state-of-the-art GNN baselines on semi-supervised node classification. Finally, we show that GRAND mitigates the issues of over-smoothing and non-robustness, exhibiting better generalization behavior than existing GNNs. The source code of GRAND is publicly available at https://github.com/Grand20/grand.
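To make the two steps concrete, below is a minimal PyTorch sketch of random propagation (DropNode followed by mixed-order propagation) and a sharpening-based consistency loss of the kind described above. It assumes a dense, symmetrically normalized adjacency matrix `adj` with self-loops; the function names `random_propagation` and `consistency_loss` and the default hyperparameters are illustrative choices, not the released implementation (see the repository linked above for that).

```python
import torch
import torch.nn.functional as F

def random_propagation(x, adj, drop_rate=0.5, order=5):
    """One random-propagation augmentation (illustrative sketch).

    x:   (N, d) node feature matrix
    adj: (N, N) dense, symmetrically normalized adjacency with self-loops
    """
    # DropNode: zero out entire feature rows at random, then rescale the
    # survivors so the expected feature matrix equals the original one.
    keep_prob = 1.0 - drop_rate
    mask = torch.bernoulli(torch.full((x.size(0), 1), keep_prob, device=x.device))
    h = x * mask / keep_prob
    # Mixed-order propagation: average h, adj @ h, ..., adj^K @ h.
    out, cur = h, h
    for _ in range(order):
        cur = adj @ cur
        out = out + cur
    return out / (order + 1)

def consistency_loss(logits_list, temperature=0.5):
    """Consistency regularization across S augmentations (illustrative sketch).

    Pulls each prediction toward the sharpened average distribution.
    """
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    avg = torch.stack(probs, dim=0).mean(dim=0)
    # Temperature sharpening; detach so the target acts as a fixed pseudo-label.
    sharpened = avg ** (1.0 / temperature)
    sharpened = (sharpened / sharpened.sum(dim=1, keepdim=True)).detach()
    return sum(((p - sharpened) ** 2).sum(dim=1).mean() for p in probs) / len(probs)
```

In a training loop, one would draw several augmentations per step, feed each through the classifier, apply cross-entropy on the labeled nodes, and add the consistency loss over all nodes.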