This paper studies learning node representations with graph neural networks (GNNs) in unsupervised scenarios. Specifically, we derive a theoretical analysis and provide an empirical demonstration of the unstable performance of GNNs across different graph datasets when the supervision signals are not appropriately defined. The performance of GNNs depends on both the smoothness of node features and the locality of the graph structure. To smooth the discrepancy between node proximity measured by graph topology and that measured by node features, we propose SAIL - a novel \underline{S}elf-\underline{A}ugmented graph contrast\underline{i}ve \underline{L}earning framework, with two complementary self-distilling regularization modules, \emph{i.e.}, intra- and inter-graph knowledge distillation. We demonstrate the competitive performance of SAIL on a variety of graph applications. Even with a single GNN layer, SAIL consistently achieves competitive or better performance on various benchmark datasets compared with state-of-the-art baselines.