The inception of the Relational Graph Convolutional Network (R-GCN) marked a milestone in the Semantic Web domain as a widely cited method that generalises end-to-end hierarchical representation learning to Knowledge Graphs (KGs). R-GCNs generate representations for nodes of interest by repeatedly aggregating parameterised, relation-specific transformations of their neighbours. However, in this paper, we argue that the R-GCN's main contribution lies in this "message passing" paradigm, rather than in the learned weights. To this end, we introduce the "Random Relational Graph Convolutional Network" (RR-GCN), which leaves all parameters untrained and thus constructs node embeddings by aggregating randomly transformed random representations from neighbours, i.e., with no learned parameters. We empirically show that RR-GCNs can compete with fully trained R-GCNs in both node classification and link prediction settings.
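The core idea can be illustrated with a minimal NumPy sketch of untrained relational message passing. This is not the paper's implementation: the toy graph, embedding dimension, mean normalisation, and ReLU non-linearity are all illustrative assumptions; the point is only that every weight matrix is drawn randomly and never updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy KG: 4 nodes, 2 relation types, directed edges (src, rel, dst).
num_nodes, num_rels, dim = 4, 2, 8
edges = [(0, 0, 1), (1, 1, 2), (2, 0, 3), (3, 1, 0)]

# Random, untrained node features and relation-specific weights (never updated).
H0 = rng.standard_normal((num_nodes, dim))
W_rel = rng.standard_normal((num_rels, dim, dim))
W_self = rng.standard_normal((dim, dim))

def rrgcn_layer(H):
    """One untrained R-GCN-style layer: sum randomly transformed neighbour
    messages per relation, normalise by in-degree, add a self-loop term,
    and apply a ReLU non-linearity."""
    msgs = np.zeros_like(H)
    deg = np.zeros(num_nodes)
    for src, rel, dst in edges:
        msgs[dst] += H[src] @ W_rel[rel]   # relation-specific random transform
        deg[dst] += 1
    out = H @ W_self + msgs / np.maximum(deg, 1)[:, None]
    return np.maximum(out, 0)  # ReLU

# Two rounds of random message passing give each node a fixed embedding that
# encodes its 2-hop relational neighbourhood; only a downstream classifier
# (e.g. logistic regression) would be trained on these.
Z = rrgcn_layer(rrgcn_layer(H0))
print(Z.shape)  # (4, 8)
```

Because the transformations are frozen, the embeddings are deterministic given the seed, and the only trainable component in such a pipeline is the downstream node classifier or link-prediction scorer.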