Graph neural networks (GNNs) have achieved notable success in the semi-supervised learning setting. The message passing mechanism in GNNs helps unlabeled nodes gather supervision signals from their labeled neighbors. In this work, we investigate how consistency regularization, one of the most widely adopted semi-supervised learning techniques, can improve the performance of graph neural networks. We revisit two consistency regularization methods for GNNs: simple consistency regularization (SCR) and mean-teacher consistency regularization (MCR). We combine these methods with two state-of-the-art GNNs and conduct experiments on the ogbn-products dataset of the Open Graph Benchmark (OGB). With consistency regularization, the performance of state-of-the-art GNNs improves by 0.3% on ogbn-products, both with and without external data.
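The two techniques named above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names (`consistency_loss`, `ema_update`) and the use of mean squared error between prediction distributions are illustrative assumptions. The idea of SCR-style training is to penalize disagreement between predictions on the same unlabeled nodes under different stochastic perturbations, while MCR-style training compares the student against a teacher whose weights are an exponential moving average (EMA) of the student's.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_a, logits_b):
    # SCR-style term (illustrative): mean squared error between the
    # class distributions produced by two perturbed forward passes
    # (e.g. different dropout masks) on the same unlabeled nodes.
    p_a = softmax(logits_a)
    p_b = softmax(logits_b)
    return float(np.mean((p_a - p_b) ** 2))

def ema_update(teacher, student, decay=0.99):
    # MCR-style teacher update (illustrative): the teacher's weights
    # track an exponential moving average of the student's weights.
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher, student)]

# Toy example: 4 unlabeled nodes, 3 classes, two noisy forward passes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
noisy_a = logits + 0.1 * rng.normal(size=logits.shape)
noisy_b = logits + 0.1 * rng.normal(size=logits.shape)
loss = consistency_loss(noisy_a, noisy_b)

# Toy EMA step over two "weight" arrays.
teacher = [np.zeros(3)]
student = [np.ones(3)]
teacher = ema_update(teacher, student, decay=0.9)
```

In a full training loop this consistency term would be added, with some weight, to the usual supervised cross-entropy on the labeled nodes.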