Despite the success of Graph Neural Networks (GNNs) in various applications, GNNs suffer significant performance degradation when the amount of supervision, i.e., the number of labeled nodes, is limited, which is expected since GNNs are trained solely on the supervision obtained from the labeled nodes. On the other hand, the recent self-supervised learning paradigm aims to train GNNs by solving pretext tasks that do not require any labeled nodes, and such methods have even been shown to outperform GNNs trained with few labeled nodes. However, a major drawback of self-supervised methods is that they fall short of learning class-discriminative node representations, since no label information is utilized during training. To this end, we propose GraFN, a novel semi-supervised method for graphs that leverages a few labeled nodes to ensure that nodes belonging to the same class are grouped together, thereby achieving the best of both the semi-supervised and self-supervised worlds. Specifically, GraFN randomly samples support nodes from the labeled nodes and anchor nodes from the entire graph. It then minimizes the difference between two predicted class distributions that are non-parametrically assigned by anchor-support similarity computed from two differently augmented graphs. We experimentally show that GraFN surpasses both semi-supervised and self-supervised methods on node classification over real-world graphs. The source code for GraFN is available at https://github.com/Junseok0207/GraFN.
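To make the anchor-support mechanism concrete, the following is a minimal PyTorch-style sketch of a non-parametric, cross-view consistency loss of the kind described above. It is illustrative only: the function name, the cosine-similarity and temperature choices, the symmetric KL objective, and the stop-gradient on the target view are our assumptions, not the authors' exact implementation (which may, for instance, sharpen one view's distribution and use a cross-entropy term instead; see the repository for the actual code).

```python
import torch
import torch.nn.functional as F

def anchor_support_consistency_loss(z1, z2, support_idx, support_labels,
                                    anchor_idx, num_classes, tau=0.1):
    """Hypothetical sketch of a label-guided consistency loss.

    z1, z2          : (N, d) node embeddings from two augmented views.
    support_idx     : indices of support nodes sampled from the labeled set.
    support_labels  : (S,) class labels of the support nodes.
    anchor_idx      : indices of anchor nodes sampled from the entire graph.
    """
    # One-hot matrix mapping each support node to its class: (S, C)
    onehot = F.one_hot(support_labels, num_classes).float()

    def class_distribution(z_anchor, z_support):
        # Cosine similarity between anchors and supports: (A, S)
        sim = F.normalize(z_anchor, dim=1) @ F.normalize(z_support, dim=1).T
        # Soft assignment over supports, aggregated per class: (A, C);
        # rows already sum to 1 since each support has exactly one class.
        return F.softmax(sim / tau, dim=1) @ onehot

    p1 = class_distribution(z1[anchor_idx], z1[support_idx])
    p2 = class_distribution(z2[anchor_idx], z2[support_idx])

    # Cross-view consistency: each view's prediction supervises the other.
    # Clamp avoids log(0) if a class happens to have no sampled support.
    loss = 0.5 * (
        F.kl_div(p1.clamp(min=1e-8).log(), p2.detach(), reduction='batchmean')
        + F.kl_div(p2.clamp(min=1e-8).log(), p1.detach(), reduction='batchmean')
    )
    return loss
```

In a full training loop, this term would typically be combined with a standard supervised loss on the labeled nodes and an unsupervised representation-learning objective; sampling supports per class keeps every class represented in the soft assignment.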