Contrastive learning methods based on the InfoNCE loss are popular in node representation learning tasks on graph-structured data. However, their reliance on data augmentation and their quadratic computational complexity can lead to inconsistency and inefficiency. To mitigate these limitations, in this paper we introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL for short). Local-GCL consists of two key designs: 1) we fabricate the positive examples for each node directly from its first-order neighbors, which frees our method from reliance on carefully designed graph augmentations; 2) to improve the efficiency of contrastive learning on graphs, we devise a kernelized contrastive loss that can be computed approximately in linear time and space with respect to the graph size. We provide theoretical analysis to justify the effectiveness and rationality of the proposed methods. Experiments on datasets of various scales and properties demonstrate that, despite its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs.
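The linear-time kernelized loss described above can be approximated with positive random features for the exponential kernel (a Performer-style approximation). The sketch below is a minimal illustration under that assumption, not the paper's exact formulation: the function names `random_feature_map` and `local_contrastive_loss` are hypothetical, the temperature is fixed to 1, and first-order neighbors serve as positives as the abstract describes. The key point is that the InfoNCE denominator, normally an O(n^2) sum over all node pairs, collapses to a single shared feature sum computed once in O(n).

```python
import numpy as np

def random_feature_map(Z, W):
    """Positive random features phi(z) such that
    E[phi(x) . phi(y)] = exp(x . y)  for  W ~ N(0, I).
    phi(z) = exp(W z - ||z||^2 / 2) / sqrt(m)
    """
    m = W.shape[0]
    proj = Z @ W.T                                   # (n, m) random projections
    sq = 0.5 * np.sum(Z ** 2, axis=1, keepdims=True)  # ||z||^2 / 2 per node
    return np.exp(proj - sq) / np.sqrt(m)

def local_contrastive_loss(Z, edges, W):
    """Approximate InfoNCE (temperature = 1) with first-order
    neighbors as positives, in linear time and space.

    Z     : (n, d) L2-normalized node embeddings
    edges : (e, 2) integer array of (node, neighbor) positive pairs
    W     : (m, d) Gaussian random projection matrix
    """
    phi = random_feature_map(Z, W)      # (n, m)
    phi_sum = phi.sum(axis=0)           # (m,)  one pass over all nodes
    # denom[i] approximates sum_j exp(z_i . z_j) without forming the n x n matrix
    denom = phi @ phi_sum               # (n,)
    # exact positive similarities z_i . z_neighbor
    pos = np.sum(Z[edges[:, 0]] * Z[edges[:, 1]], axis=1)
    return float(np.mean(-pos + np.log(denom[edges[:, 0]])))
```

Because `phi_sum` is shared across all anchor nodes, both memory and time scale with the number of nodes and edges rather than quadratically, which is the efficiency property the abstract claims; the quality of the approximation improves as the number of random features `m` grows.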