Graph representation learning plays a vital role in processing graph-structured data. However, most prior work on graph representation learning relies heavily on labeled data. To overcome this problem, inspired by the recent success of graph contrastive learning and Siamese networks in visual representation learning, we propose a novel self-supervised approach in this paper to learn node representations by enhancing Siamese self-distillation with multi-scale contrastive learning. Specifically, we first generate two augmented views from the input graph based on local and global perspectives. Then, we employ two objectives, called cross-view and cross-network contrastiveness, to maximize the agreement between node representations across different views and networks. To demonstrate the effectiveness of our approach, we perform empirical experiments on five real-world datasets. Our method not only achieves new state-of-the-art results but also surpasses several semi-supervised counterparts by large margins. Code is made available at https://github.com/GRAND-Lab/MERIT.
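To make the two objectives concrete, the sketch below shows one plausible form of the cross-view and cross-network contrastive terms: an InfoNCE-style loss where, for each node, the embedding of the same node under the other view (or produced by the Siamese target network) is the positive, and all other nodes are negatives. This is a minimal illustration under assumed names and hyperparameters (`tau`, the random placeholder embeddings, and the simple sum of the two terms are all assumptions), not the authors' implementation; the exact augmentations, encoders, and loss weighting are in the linked repository.

```python
import torch
import torch.nn.functional as F

def sim_matrix(z1, z2, tau=0.5):
    # Pairwise cosine similarities between two sets of node embeddings,
    # exponentiated with a temperature (tau is an assumed hyperparameter).
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    return torch.exp(z1 @ z2.t() / tau)

def contrastive_loss(h_anchor, h_pos):
    # InfoNCE-style objective: the diagonal holds same-node pairs
    # (positives); every off-diagonal entry acts as a negative.
    s = sim_matrix(h_anchor, h_pos)
    return -torch.log(s.diag() / s.sum(dim=1)).mean()

# Hypothetical node embeddings: an online encoder applied to the two
# augmented views, and a Siamese (momentum) target encoder on view 2.
n_nodes, dim = 8, 16
h1_online = torch.randn(n_nodes, dim)
h2_online = torch.randn(n_nodes, dim)
h2_target = torch.randn(n_nodes, dim)

# Cross-view term: same network, different augmented views.
l_cross_view = contrastive_loss(h1_online, h2_online)
# Cross-network term: online network against the target network;
# the target side is detached so it is updated only by momentum.
l_cross_network = contrastive_loss(h1_online, h2_target.detach())

total_loss = l_cross_view + l_cross_network
print(total_loss.item())
```

In a self-distillation setup of this kind, the target network is typically not trained by gradient descent but updated as an exponential moving average of the online network's weights, which is why the target embeddings are detached above.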