To fully exploit the rich information in both the topological structure and the node features of attributed graphs, we introduce a self-supervised learning mechanism into graph representation learning and propose a novel Self-supervised Consensus Representation Learning (SCRL) framework. In contrast to most existing works that explore only a single graph, our proposed SCRL method treats the graph from two perspectives: a topology graph and a feature graph. We argue that their embeddings should share common information, which can serve as a supervisory signal. Specifically, we construct the feature graph from the node features via the k-nearest neighbor algorithm. Graph convolutional network (GCN) encoders then extract features from the two graphs respectively. A self-supervised loss is designed to maximize the agreement between the embeddings of the same node in the topology graph and the feature graph. Extensive experiments on real-world citation networks and social networks demonstrate the superiority of the proposed SCRL over state-of-the-art methods on the semi-supervised node classification task. Meanwhile, compared with its main competitors, SCRL is quite efficient.
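To make the dual-graph idea concrete, the following is a minimal, illustrative sketch (not the authors' released code): the feature graph is built with k-nearest neighbors on the raw node features, two standard two-layer GCN encoders embed the topology graph and the feature graph respectively, and a simplified agreement term pulls the two embeddings of each node together. The toy data, network sizes, and the MSE-based consensus loss are assumptions for illustration; the paper's actual self-supervised loss and training details may differ.

```python
# Illustrative sketch of the dual-graph setup (assumed details, not the SCRL implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.neighbors import kneighbors_graph


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I as in a standard GCN: D^{-1/2}(A+I)D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class GCNEncoder(nn.Module):
    """Two-layer GCN encoder: Z = Â ReLU(Â X W1) W2."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)


def consensus_loss(z_topo, z_feat):
    """Simplified agreement term: align the two embeddings of the same node."""
    return F.mse_loss(F.normalize(z_topo, dim=1), F.normalize(z_feat, dim=1))


# Toy data: 100 nodes with 16-dim features and a random symmetric topology graph.
x = torch.randn(100, 16)
a_topo = (torch.rand(100, 100) < 0.05).float()
a_topo = ((a_topo + a_topo.t()) > 0).float()

# Feature graph from k-nearest neighbors on the node features (k is a hyperparameter).
a_feat = torch.tensor(
    kneighbors_graph(x.numpy(), n_neighbors=5, include_self=False).toarray(),
    dtype=torch.float32,
)
a_feat = ((a_feat + a_feat.t()) > 0).float()

# One GCN encoder per graph view, trained jointly on the agreement objective.
enc_topo, enc_feat = GCNEncoder(16, 32, 8), GCNEncoder(16, 32, 8)
opt = torch.optim.Adam(
    list(enc_topo.parameters()) + list(enc_feat.parameters()), lr=1e-2
)

adj_t, adj_f = normalize_adj(a_topo), normalize_adj(a_feat)
for _ in range(50):
    opt.zero_grad()
    loss = consensus_loss(enc_topo(x, adj_t), enc_feat(x, adj_f))
    loss.backward()
    opt.step()
```

In practice the consensus term would be combined with a supervised classification loss on the labeled nodes for the semi-supervised setting described above; the sketch only shows the self-supervised agreement between the two graph views.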