Current state-of-the-art self-supervised learning methods for graph neural networks (GNNs) are based on contrastive learning. As such, they heavily depend on the construction of augmentations and negative examples. For example, on the standard PPI benchmark, increasing the number of negative pairs improves performance, so reaching peak performance requires computation and memory cost quadratic in the number of nodes. Inspired by BYOL, a recently introduced self-supervised learning method that does not require negative pairs, we present Bootstrapped Graph Latents (BGRL), a self-supervised graph representation method that eliminates this potentially quadratic bottleneck. BGRL outperforms or matches the previous unsupervised state-of-the-art results on several established benchmark datasets. Moreover, it enables the effective use of graph attentional (GAT) encoders, allowing us to further improve the state of the art. In particular, on the PPI dataset, using a GAT encoder we achieve a state-of-the-art 70.49% Micro-F1 under the linear evaluation protocol. On all other datasets under consideration, our model is competitive with the equivalent supervised GNN results, often exceeding them.
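As a rough illustration of the bootstrapping idea described above, the sketch below pairs an online GNN encoder and predictor against a slowly-moving target encoder, trained so that the online network predicts the target's embedding of a second augmented view using a cosine-similarity loss, with no negative pairs. The encoder, augmentation, and hyperparameters here (`GCNEncoder`, feature masking, a 0.99 EMA decay) are illustrative assumptions, not the authors' implementation.

```python
# Minimal BYOL-style bootstrapping sketch for graphs, under the assumptions above.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNEncoder(nn.Module):
    """Toy one-layer GCN: H = PReLU(A_hat X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.act = nn.PReLU()

    def forward(self, a_hat, x):
        return self.act(a_hat @ self.lin(x))

def augment(a_hat, x, feat_drop=0.2):
    # Simple augmentation: randomly mask node features (edge dropping omitted).
    mask = (torch.rand_like(x) > feat_drop).float()
    return a_hat, x * mask

@torch.no_grad()
def ema_update(target, online, decay=0.99):
    # Target network tracks an exponential moving average of the online network.
    for pt, po in zip(target.parameters(), online.parameters()):
        pt.mul_(decay).add_(po, alpha=1 - decay)

# Toy graph: N nodes, normalized adjacency a_hat, features x.
N, D, H = 8, 16, 32
a_hat = torch.eye(N)  # stand-in for D^{-1/2}(A + I)D^{-1/2}
x = torch.randn(N, D)

online = GCNEncoder(D, H)
target = copy.deepcopy(online)
for p in target.parameters():
    p.requires_grad_(False)
predictor = nn.Linear(H, H)  # online-side predictor head (an MLP in practice)
opt = torch.optim.Adam(list(online.parameters()) + list(predictor.parameters()), lr=1e-3)

for step in range(100):
    (a1, x1), (a2, x2) = augment(a_hat, x), augment(a_hat, x)
    # Online network predicts the target's embedding of the other view.
    q1, q2 = predictor(online(a1, x1)), predictor(online(a2, x2))
    with torch.no_grad():  # no gradients flow into the target encoder
        z1, z2 = target(a1, x1), target(a2, x2)
    # Symmetrized cosine-similarity loss: no negative pairs, so the cost is
    # linear in the number of nodes rather than quadratic.
    loss = -(F.cosine_similarity(q1, z2).mean() + F.cosine_similarity(q2, z1).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(target, online)
```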