Existing graph contrastive learning methods rely on augmentation techniques based on random perturbations (e.g., randomly adding or dropping edges and nodes). However, altering certain edges or nodes can unexpectedly change the graph's characteristics, and choosing the optimal perturbation ratio for each dataset requires onerous manual tuning. In this paper, we introduce Implicit Graph Contrastive Learning (iGCL), which utilizes augmentations in a latent space learned by a Variational Graph Auto-Encoder (VGAE) that reconstructs the graph's topological structure. Importantly, instead of explicitly sampling augmentations from the latent distributions, we further derive an upper bound on the expected contrastive loss to improve the efficiency of our learning algorithm. Graph semantics can thus be preserved within the augmentations in an intelligent way, without arbitrary manual design or prior human knowledge. Experimental results on both graph-level and node-level tasks show that the proposed method achieves state-of-the-art performance compared with existing baselines, and ablation studies further demonstrate the effectiveness of each module in iGCL.
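To make the latent-space augmentation idea concrete, the following is a minimal NumPy sketch: a one-layer GCN-style VGAE encoder produces per-node Gaussian parameters, two "views" are drawn by reparameterized sampling (the implicit augmentation), and a standard InfoNCE loss contrasts corresponding nodes across views. All function names, dimensions, and the toy graph are illustrative assumptions; the actual iGCL algorithm avoids this explicit sampling by optimizing an upper bound on the expected contrastive loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(adj, feats, w_mu, w_logvar):
    """GCN-style VGAE encoder (sketch): symmetric-normalized propagation,
    then linear heads for per-node mean and log-variance."""
    a = adj + np.eye(adj.shape[0])            # add self-loops
    d = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * d[:, None] * d[None, :]      # D^{-1/2} A D^{-1/2}
    h = a_norm @ feats
    return h @ w_mu, h @ w_logvar

def sample_view(mu, logvar):
    """Reparameterized sample z = mu + sigma * eps: one latent 'augmentation'."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def info_nce(z1, z2, tau=0.5):
    """Node-level InfoNCE: the same node in the two views is the positive pair,
    all other cross-view nodes are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim -= sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy 4-node path graph with random features and untrained weights.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.standard_normal((4, 8))
w_mu = rng.standard_normal((8, 16))
w_logvar = rng.standard_normal((8, 16)) * 0.1

mu, logvar = encode(adj, feats, w_mu, w_logvar)
z1, z2 = sample_view(mu, logvar), sample_view(mu, logvar)
loss = info_nce(z1, z2)
print(float(loss))
```

In a full pipeline the encoder would be trained jointly with a VGAE reconstruction objective on the adjacency matrix, so that the latent distributions, and hence the sampled views, preserve graph topology rather than perturbing it at random.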