Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations. GCL can generate graph-level embeddings by maximizing the Mutual Information (MI) between different augmented views of the same graph (positive pairs). However, GCL is limited by dimensional collapse, i.e., the embedding vectors occupy only a low-dimensional subspace. In this paper, we show that the smoothing effect of graph pooling and the implicit regularization of graph convolution are two causes of dimensional collapse in GCL. To mitigate this issue, we propose a non-maximum removal graph contrastive learning approach (nmrGCL), which removes the "prominent" dimensions (i.e., those contributing most to the similarity measure) of each positive pair in the pretext task. Comprehensive experiments on various benchmark datasets demonstrate the effectiveness of nmrGCL, and the results show that our model outperforms state-of-the-art methods. The source code will be made publicly available.
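To make the core idea concrete, below is a minimal sketch of the non-maximum removal step, assuming the "prominent" dimensions of a positive pair are those with the largest per-dimension contribution to the dot-product similarity between the two views. The function names, the removal ratio, and the temperature are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of non-maximum removal for graph contrastive learning.
# Assumes embeddings z1, z2 of shape (B, D) come from two augmented views
# of the same batch of graphs; names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def remove_prominent_dims(z1, z2, ratio=0.1):
    """Zero out, per positive pair, the dimensions that contribute most
    to the similarity between z1[i] and z2[i]."""
    contrib = z1 * z2                     # (B, D): per-dimension contribution to <z1[i], z2[i]>
    k = max(1, int(ratio * z1.size(1)))   # number of "prominent" dimensions to drop
    idx = contrib.topk(k, dim=1).indices  # indices of the k most prominent dimensions per pair
    mask = torch.ones_like(z1)
    mask.scatter_(1, idx, 0.0)            # remove those dimensions from both views
    return z1 * mask, z2 * mask

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE loss over a batch: positives sit on the diagonal."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Usage: apply the removal to the positive pair before computing the loss,
# forcing the objective to spread information across the remaining dimensions.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
z1_nm, z2_nm = remove_prominent_dims(z1, z2, ratio=0.1)
loss = info_nce(z1_nm, z2_nm)
```

The intuition behind this design, as the abstract suggests, is that masking the dimensions that already dominate the similarity prevents the contrastive objective from collapsing onto a few directions, encouraging the encoder to use the full embedding space.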