We present Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data, to enable the wide applicability of Graph Convolutional Neural Networks (GCNNs). We formalize the proposed model from an information-theoretic perspective by maximizing the mutual information between topology transformations and node representations before and after the transformations. We show that maximizing such mutual information can be relaxed to minimizing the cross entropy between the applied topology transformation and its estimation from node representations. In particular, we sample a subset of node pairs from the original graph and flip the edge connectivity between each pair to transform the graph topology. We then self-train a representation encoder to learn node representations by reconstructing the topology transformations from the feature representations of the original and transformed graphs. In experiments, we apply the proposed model to downstream node and graph classification tasks, and results show that the proposed method outperforms state-of-the-art unsupervised approaches.
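The topology transformation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `transform_topology`, the sampling `ratio` parameter, and the use of a dense NumPy adjacency matrix are all assumptions for clarity.

```python
import numpy as np

def transform_topology(adj, ratio=0.1, rng=None):
    """Sample a subset of node pairs from an undirected graph and flip
    the edge connectivity between each sampled pair.

    adj   : (n, n) symmetric 0/1 adjacency matrix of the original graph
    ratio : fraction of all node pairs to sample and flip (assumed knob)
    Returns the transformed adjacency matrix and the sampled pairs,
    which serve as self-supervision targets for the encoder.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    # Enumerate all unordered node pairs (i < j).
    iu, ju = np.triu_indices(n, k=1)
    k = int(ratio * len(iu))
    sel = rng.choice(len(iu), size=k, replace=False)
    transformed = adj.copy()
    # Flip connectivity: add the edge if absent, remove it if present.
    transformed[iu[sel], ju[sel]] = 1 - transformed[iu[sel], ju[sel]]
    # Mirror the flips to keep the adjacency matrix symmetric.
    transformed[ju[sel], iu[sel]] = transformed[iu[sel], ju[sel]]
    return transformed, (iu[sel], ju[sel])
```

The encoder is then trained to predict, for each sampled pair, whether its connectivity was flipped, given node representations of both the original and transformed graphs.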