Unsupervised representation learning on graph data is a non-trivial problem. The success of contrastive learning and self-supervised learning for unsupervised representation learning on structured data has inspired similar attempts on graphs. Current unsupervised graph representation learning and pre-training methods with contrastive losses rely mainly on contrasts between handcrafted augmented views of the graph data. However, graph data augmentation remains under-explored because the invariances it should preserve are hard to predict. In this paper, we propose a novel collaborative graph neural network contrastive learning framework (CGCL), which uses multiple graph encoders to observe the same graph. The features observed from the different views act as the graph augmentations for contrastive learning between the encoders, avoiding any perturbation of the graph and thus guaranteeing invariance. CGCL is capable of handling both graph-level and node-level representation learning. Extensive experiments demonstrate the advantages of CGCL for unsupervised graph representation learning and show that composing handcrafted data augmentations is not necessary for graph representation learning.
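The cross-encoder contrast described above can be sketched as an InfoNCE-style loss between the embeddings two different encoders produce for the same batch of graphs. This is an illustrative sketch only, not the paper's exact objective; the encoder outputs are stubbed with random matrices, and the names `z_a`/`z_b` and the GCN/GAT pairing are hypothetical.

```python
import numpy as np

def info_nce_cross_view(z1, z2, tau=0.5):
    """InfoNCE between two encoders' embeddings of the same graph batch.

    For graph i, the positive pair is (z1[i], z2[i]); the other graphs in
    the batch act as negatives. No perturbation of the input graph is needed.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # cosine-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                # pairwise similarities
    logits = sim - sim.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                   # average over positives

# Toy batch: 4 graphs, 8-dim embeddings from two hypothetical encoders
# (e.g. one GCN and one GAT observing the same graphs).
rng = np.random.default_rng(0)
z_a = rng.normal(size=(4, 8))
z_b = rng.normal(size=(4, 8))
loss = info_nce_cross_view(z_a, z_b)
```

When the two encoders agree on each graph (aligned views), the loss is small; for unrelated embeddings it approaches log of the batch size, which is what drives the encoders to produce consistent representations of the same graph.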