Given augmented views of each input graph, contrastive learning methods (e.g., InfoNCE) optimize pairwise alignment of graph embeddings across views, but they provide no mechanism to control the global structure of the view-specific graph-of-graphs built from these embeddings. We introduce SpecMatch-CL, a novel loss function that aligns the view-specific graphs-of-graphs by minimizing the difference between their normalized Laplacians. Theoretically, we show that under certain assumptions, the difference between the normalized Laplacians upper-bounds not only the gap between the current contrastive loss and the ideal perfect-alignment contrastive loss, but also the uniformity loss. Empirically, SpecMatch-CL establishes a new state of the art on eight TU benchmarks under unsupervised learning and under semi-supervised learning at low label rates, and yields consistent gains in transfer learning on the PPI-306K and ZINC 2M datasets.
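To make the loss concrete, below is a minimal PyTorch sketch of one plausible instantiation. It assumes the graph-of-graphs for each view is a dense cosine-similarity graph over the batch of graph embeddings, and it matches the two views' normalized Laplacians in squared Frobenius norm; the function names (`normalized_laplacian`, `spec_match_loss`) and the choice of graph construction and matrix norm are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def normalized_laplacian(z: torch.Tensor) -> torch.Tensor:
    """Normalized Laplacian of a similarity graph over batch embeddings.

    Assumption: the view-specific graph-of-graphs is a dense
    cosine-similarity graph over the n graph embeddings in z (n x d).
    """
    z = F.normalize(z, dim=1)
    a = torch.clamp(z @ z.t(), min=0.0)   # non-negative affinities
    a.fill_diagonal_(0.0)                 # no self-loops
    d = a.sum(dim=1)                      # node degrees
    d_inv_sqrt = torch.rsqrt(torch.clamp(d, min=1e-12))
    # L = I - D^{-1/2} A D^{-1/2}
    return (torch.eye(a.size(0), device=z.device)
            - d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :])

def spec_match_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius distance between the two views' normalized Laplacians."""
    return (normalized_laplacian(z1) - normalized_laplacian(z2)).pow(2).sum()
```

In a training loop, this term would presumably be combined with a standard pairwise contrastive objective, e.g. `loss = infonce(z1, z2) + lam * spec_match_loss(z1, z2)`, where `lam` is a hypothetical weighting hyperparameter.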