Graph Convolutional Networks (GCNs) are widely used in many applications, yet they still require large amounts of labelled data for training. Moreover, the adjacency matrix of a GCN is fixed, so the data-processing pipeline cannot efficiently adjust the quantity of training data derived from the constructed graph structure. To further improve the performance and self-learning ability of GCNs, in this paper we propose an efficient self-supervised learning strategy for GCNs, named randomly removed links with a fixed step at one region (RRLFSOR). RRLFSOR can be regarded as a new data augmenter that alleviates over-smoothing. We evaluate RRLFSOR on two efficient and representative GCN models with three public citation-network datasets: Cora, PubMed, and Citeseer. Experiments on transductive link-prediction tasks show that our strategy consistently outperforms the baseline models by up to 21.34% in accuracy on the three benchmark datasets.
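The abstract does not spell out the removal procedure, but the name suggests dropping edges at a fixed step within a chosen region of the graph. The sketch below is an illustrative assumption of that idea, not the paper's exact algorithm: the function name, the `region` set, and the `step` slicing are all hypothetical.

```python
import random

def randomly_remove_links(edges, region, step, seed=0):
    """Illustrative sketch (assumed behaviour, not the paper's exact
    algorithm): among edges whose endpoints both lie in `region` (a set
    of node ids), shuffle and remove one edge per fixed `step`; edges
    outside the region are always kept."""
    rng = random.Random(seed)
    in_region = [e for e in edges if e[0] in region and e[1] in region]
    rng.shuffle(in_region)
    removed = set(in_region[::step])  # one removal per fixed step
    return [e for e in edges if e not in removed]

# Toy example: 5 edges, region {0, 1, 2}, step 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
augmented = randomly_remove_links(edges, region={0, 1, 2}, step=2)
```

In this sketch the retained edge list would serve as the augmented adjacency structure for one self-supervised training round; repeating the procedure with different seeds yields varied training graphs from the same input graph.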