Self-supervised learning (SSL) is an essential research area that aims to eliminate the need for expensive data labeling. Despite the great success of SSL methods in computer vision and natural language processing, most of them employ contrastive learning objectives that require negative samples, which are hard to define. This becomes even more challenging in the case of graphs and is a bottleneck for achieving robust representations. To overcome these limitations, we propose a framework for self-supervised graph representation learning, Graph Barlow Twins, which utilizes a cross-correlation-based loss function instead of negative samples. Moreover, unlike the state-of-the-art self-supervised graph representation learning method BGRL, it does not rely on non-symmetric neural network architectures. We show that our method achieves results competitive with the best self-supervised and fully supervised methods, while requiring fewer hyperparameters and substantially less computation time (ca. 30 times faster than BGRL).
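To illustrate the cross-correlation-based objective, the sketch below implements a Barlow Twins-style loss over two batches of node embeddings, one per augmented graph view: the diagonal of the cross-correlation matrix is pushed toward 1 (invariance to augmentation) and the off-diagonal toward 0 (redundancy reduction). This is a minimal sketch, not the paper's exact implementation; the function name `barlow_twins_loss` and the hyperparameter-free off-diagonal weight `1/D` are our assumptions.

```python
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """Cross-correlation loss over two (N, D) batches of node embeddings,
    one per augmented graph view. Illustrative sketch, not the paper's code."""
    n, d = z_a.shape

    # Standardize each embedding dimension across the batch (zero mean, unit std).
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + 1e-8)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + 1e-8)

    # Empirical (D, D) cross-correlation matrix between the two views.
    c = (z_a.T @ z_b) / n

    # Invariance term: diagonal entries should approach 1, so a node's
    # embedding becomes invariant to the applied augmentation.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()

    # Redundancy-reduction term: off-diagonal entries should approach 0,
    # decorrelating the individual embedding dimensions.
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()

    # Assumed trade-off weight: 1/D is one hyperparameter-free choice.
    return on_diag + off_diag / d
```

Because both views can pass through the same encoder and the loss requires neither negative samples nor a predictor network with stop-gradient, the symmetric architecture claimed in the abstract follows directly from this objective.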