We introduce a self-supervised approach for learning node- and graph-level representations by contrasting structural views of graphs. We show that, unlike in visual representation learning, neither increasing the number of views beyond two nor contrasting multi-scale encodings improves performance; the best performance is achieved by contrasting encodings from first-order neighbors and a graph diffusion. We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol. For example, on the Cora (node) and Reddit-Binary (graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, corresponding to 5.5% and 2.4% relative improvements over the previous state of the art. Compared to supervised baselines, our approach outperforms them on 4 out of 8 benchmarks. Source code is released at: https://github.com/kavehhassani/mvgrl
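As a concrete illustration of the two contrasted structural views, the sketch below computes a personalized PageRank (PPR) diffusion matrix that, together with the first-order adjacency, provides the pair of views described above. This is a minimal NumPy sketch assuming the standard PPR diffusion S = α(I − (1 − α) D̂^{-1/2} Â D̂^{-1/2})^{-1}; the function name `ppr_diffusion`, the teleport probability α = 0.2, and the toy graph are illustrative choices, not taken from the released code.

```python
import numpy as np

def ppr_diffusion(adj: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Personalized PageRank (PPR) diffusion of a graph.

    Computes S = alpha * (I - (1 - alpha) * T)^{-1}, where T is the
    symmetrically normalized adjacency matrix with self-loops.
    """
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    t = d_inv_sqrt @ a_hat @ d_inv_sqrt          # D^{-1/2} (A + I) D^{-1/2}
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * t)

# Toy 4-node path graph: the sparse adjacency gives the first-order
# (local) view, while the dense diffusion matrix gives a global view.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
diff = ppr_diffusion(adj, alpha=0.2)
print(diff.round(3))  # dense matrix; distant nodes receive small weight
```

In a contrastive setup, one GNN encoder would operate on the adjacency view and a second on this diffusion view, with their node and graph encodings contrasted against each other.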