The self-supervised learning (SSL) paradigm is an essential area of research that aims to eliminate the need for expensive data labeling. Despite the great success of SSL methods in computer vision and natural language processing, most of them employ contrastive learning objectives that require negative samples, which are hard to define. This becomes even more challenging in the case of graphs and is a bottleneck for achieving robust representations. To overcome such limitations, we propose a framework for self-supervised graph representation learning -- Graph Barlow Twins, which utilizes a cross-correlation-based loss function instead of negative samples. Moreover, it does not rely on non-symmetric neural network architectures, in contrast to the state-of-the-art self-supervised graph representation learning method BGRL. We show that our method achieves results as competitive as those of BGRL, the best self-supervised methods, and fully supervised ones, while requiring substantially fewer hyperparameters and converging in an order of magnitude fewer training steps.
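To make the core idea concrete, below is a minimal sketch of a cross-correlation-based (Barlow Twins-style) objective applied to node embeddings from two augmented graph views. This is an illustrative assumption about the loss form, not the paper's reference implementation; the function name `barlow_twins_loss` and the `lambda_` weighting coefficient are placeholders introduced here.

```python
import torch


def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                      lambda_: float = 5e-3) -> torch.Tensor:
    """Cross-correlation loss over two [N, D] batches of node embeddings
    produced from two augmented views of the same graph (illustrative sketch)."""
    n, d = z_a.shape
    # Standardize each embedding dimension across the batch of nodes.
    z_a_norm = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + 1e-8)
    z_b_norm = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + 1e-8)
    # Empirical D x D cross-correlation matrix between the two views.
    c = (z_a_norm.T @ z_b_norm) / n
    # Invariance term: diagonal entries pushed towards 1.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: off-diagonal entries pushed towards 0.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_ * off_diag
```

Because the objective only compares the two views of each node with themselves (no negative pairs), it sidesteps the negative-sample mining problem highlighted above.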