Graph Contrastive Learning (GCL) has recently drawn much research interest for learning generalizable, transferable, and robust node representations in a self-supervised fashion. In general, the contrastive learning process in GCL is performed on top of the representations learned by a graph neural network (GNN) backbone, which transforms and propagates node contextual information based on local neighborhoods. However, existing GCL efforts have severe limitations in terms of encoding architecture, augmentation, and contrastive objective, which often render them inefficient and ineffective across different datasets. In this work, we go beyond existing unsupervised GCL methods and address their limitations by proposing a simple yet effective framework, S$^3$-CL. Specifically, by virtue of the proposed structural and semantic contrastive learning schemes, even a simple neural network is able to learn expressive node representations that preserve valuable structural and semantic patterns. Our experiments demonstrate that the node representations learned by S$^3$-CL achieve superior performance on different downstream tasks compared to state-of-the-art GCL methods.
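To make the general GCL recipe described above concrete, the following is a minimal PyTorch sketch of an InfoNCE-style node contrastive loss between two augmented views of a graph. This is a generic illustration of the contrastive-objective family the abstract refers to, not S$^3$-CL's actual objective; the function name `info_nce_loss`, the symmetric two-view setup, and the temperature value are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Generic InfoNCE-style node contrastive loss (illustrative, not S^3-CL).

    z1, z2: [N, d] node embeddings from two augmented views of the same graph,
    typically produced by a shared GNN encoder. Row i of z1 and row i of z2
    form a positive pair; all other rows act as in-batch negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Cosine similarities between every cross-view node pair: [N, N] logits.
    sim = (z1 @ z2.t()) / tau
    # The positive for node i sits on the diagonal, so target index i = i.
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, targets)
```

In a typical GCL pipeline, `z1` and `z2` would come from encoding two stochastic augmentations of the input graph (e.g., edge dropping or feature masking) with the same GNN backbone, and this loss would be minimized to pull each node's two views together while pushing apart other nodes.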