Self-supervised learning is gaining considerable attention as a way to avoid the need for extensive annotations in representation learning on graphs. We introduce \textit{Regularized Graph Infomax (RGI)}, a simple yet effective framework for node-level self-supervised learning on graphs that trains a graph neural network encoder by maximizing the mutual information between node-level local and global views, in contrast to previous works that employ graph-level global views. The method promotes the predictability between views while regularizing the covariance matrices of the representations. As a result, RGI is non-contrastive, does not depend on complex asymmetric architectures or training tricks, is augmentation-free, and does not rely on a two-branch architecture. We evaluate RGI in both transductive and inductive settings on popular graph benchmarks and show that it achieves state-of-the-art performance despite its simplicity.
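To make the objective described above concrete, the following is a minimal illustrative sketch (not the paper's reference implementation) of a loss that promotes predictability between two sets of node representations while regularizing their covariance matrices. The tensor names (\texttt{local\_z}, \texttt{global\_z}) and the weighting hyperparameter \texttt{lambda\_cov} are assumptions introduced for illustration only.

\begin{verbatim}
# Illustrative sketch of a predictability + covariance-regularization loss.
# Names and hyperparameters are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F


def covariance_penalty(z: torch.Tensor) -> torch.Tensor:
    """Penalize off-diagonal entries of the feature covariance matrix."""
    n, d = z.shape
    z = z - z.mean(dim=0)
    cov = (z.T @ z) / (n - 1)                      # d x d covariance matrix
    off_diag = cov - torch.diag(torch.diag(cov))   # zero out the diagonal
    return off_diag.pow(2).sum() / d


def rgi_style_loss(local_z: torch.Tensor,
                   global_z: torch.Tensor,
                   lambda_cov: float = 1.0) -> torch.Tensor:
    """Predictability (invariance) term between node-level local and global
    views, plus covariance regularization on both representations."""
    invariance = F.mse_loss(local_z, global_z)
    regularization = covariance_penalty(local_z) + covariance_penalty(global_z)
    return invariance + lambda_cov * regularization


if __name__ == "__main__":
    # Toy usage: 512 nodes with 64-dimensional embeddings from a
    # hypothetical encoder; real local/global views would come from a GNN.
    local_z = torch.randn(512, 64)
    global_z = torch.randn(512, 64)
    print(rgi_style_loss(local_z, global_z).item())
\end{verbatim}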