Self-supervised learning methods have become a popular approach for graph representation learning because they do not rely on manual labels and offer better generalization. Contrastive methods based on mutual information maximization between augmented instances of the same object are widely used in self-supervised representation learning. For graph-structured data, however, there are two obstacles to successfully applying these methods: designing a data augmentation strategy, and training a decoder to estimate the mutual information between augmented representations of nodes, sub-graphs, or graphs. In this work, we propose a self-supervised graph representation learning algorithm, Graph Information Representation Learning (GIRL). GIRL requires neither augmentations nor a decoder for mutual information estimation. The algorithm is based on an alternative information metric, \textit{recoverability}, which is closely related to mutual information but simpler to estimate. Our self-supervised algorithm consistently outperforms existing state-of-the-art contrast-based self-supervised methods by a large margin on a variety of datasets. In addition, we show how recoverability can be used in a supervised setting to alleviate the effects of over-smoothing and over-squashing in deeper graph neural networks. The code to reproduce our experiments is available at https://github.com/Anonymous1252022/Recoverability