Graph Neural Networks (GNNs) have achieved tremendous success in various real-world applications due to their strong ability in graph representation learning. GNNs explore the graph structure and node features by aggregating and transforming information within node neighborhoods. However, through theoretical and empirical analysis, we reveal that the aggregation process of GNNs tends to destroy node similarity in the original feature space. Since there are many scenarios where node similarity plays a crucial role, this motivates our proposed framework, SimP-GCN, which can effectively and efficiently preserve node similarity while exploiting graph structure. Specifically, to balance information from graph structure and node features, we propose a feature-similarity-preserving aggregation that adaptively integrates graph structure and node features. Furthermore, we employ self-supervised learning to explicitly capture the complex similarity and dissimilarity relations between node features. We validate the effectiveness of SimP-GCN on seven benchmark datasets, including three assortative and four disassortative graphs. The results demonstrate that SimP-GCN outperforms representative baselines, and further analyses reveal various advantages of the proposed framework. The implementation of SimP-GCN is available at \url{https://github.com/ChandlerBang/SimP-GCN}.
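To make the idea of feature-similarity-preserving aggregation concrete, the following is a minimal sketch, not the authors' exact formulation: it builds a kNN graph from node feature similarity and mixes its propagation matrix with the original graph's via a balance score `s` (a learnable per-node, per-layer quantity in the paper; a fixed scalar here for simplicity). The function names `knn_graph`, `row_normalize`, and `simp_layer` are illustrative, not from the released code.

```python
import numpy as np

def knn_graph(X, k=2):
    """Build a k-nearest-neighbor graph from node features (cosine similarity)."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T                        # pairwise cosine similarity
    np.fill_diagonal(S, -np.inf)         # exclude self as a neighbor
    A_f = np.zeros_like(S)
    for i in range(S.shape[0]):
        nbrs = np.argsort(S[i])[-k:]     # k most feature-similar nodes
        A_f[i, nbrs] = 1.0
    return np.maximum(A_f, A_f.T)        # symmetrize

def row_normalize(A):
    """Row-normalize an adjacency matrix so each row sums to 1."""
    d = A.sum(axis=1, keepdims=True)
    return A / np.maximum(d, 1e-12)

def simp_layer(A, X, W, s=0.5):
    """One propagation step that adaptively mixes structure and feature graphs."""
    A_hat = row_normalize(A + np.eye(A.shape[0]))  # original graph + self-loops
    A_f = row_normalize(knn_graph(X))              # feature-similarity graph
    P = s * A_hat + (1.0 - s) * A_f                # adaptive combination
    return P @ X @ W
```

With `s` close to 1 the layer behaves like a standard GCN aggregation; with `s` close to 0 it aggregates over feature-similar nodes instead, which is what preserves similarity on disassortative graphs where linked nodes often have dissimilar features.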