Knowledge graphs, modeling multi-relational data, improve numerous applications such as question answering or graph logical reasoning. Many graph neural networks for such data emerged recently, often outperforming shallow architectures. However, the design of such multi-relational graph neural networks is ad-hoc, driven mainly by intuition and empirical insights. Up to now, their expressivity, their relation to each other, and their (practical) learning performance are poorly understood. Here, we initiate the study of deriving a more principled understanding of multi-relational graph neural networks. Namely, we investigate the limitations in the expressive power of the well-known Relational GCN and Compositional GCN architectures and shed some light on their practical learning performance. By aligning both architectures with a suitable version of the Weisfeiler-Leman test, we establish under which conditions both models have the same expressive power in distinguishing non-isomorphic (multi-relational) graphs or vertices with different structural roles. Further, by leveraging recent progress in designing expressive graph neural networks, we introduce the $k$-RN architecture that provably overcomes the expressiveness limitations of the above two architectures. Empirically, we confirm our theoretical findings in a vertex classification setting over small and large multi-relational graphs.
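To make the alignment with the Weisfeiler-Leman test concrete, the following is a minimal, illustrative sketch of one-dimensional color refinement adapted to multi-relational graphs: each vertex's new color aggregates the multiset of (relation, neighbor-color) pairs over its incoming edges. This is an assumed, simplified variant for exposition; the function name `relational_wl` and the exact refinement rule are not taken from the paper.

```python
def relational_wl(vertices, edges, rounds=3):
    """Sketch of 1-WL color refinement on a multi-relational graph.

    `edges` is a list of directed, typed edges (u, relation, v).
    A vertex's new color hashes its old color together with the sorted
    multiset of (relation, neighbor-color) pairs from its in-neighbors.
    """
    color = {v: 0 for v in vertices}  # uniform initial coloring
    for _ in range(rounds):
        raw = {}
        for v in vertices:
            neigh = sorted((r, color[u]) for (u, r, w) in edges if w == v)
            raw[v] = hash((color[v], tuple(neigh)))
        # Canonicalize hash values to small integers so that colors
        # from different rounds remain comparable.
        canon = {c: i for i, c in enumerate(sorted(set(raw.values())))}
        color = {v: canon[raw[v]] for v in vertices}
    return color
```

In this variant, two vertices whose neighborhoods differ only in the relation type of an edge receive different colors, which a relation-agnostic 1-WL refinement would not distinguish; this is the kind of separation the multi-relational test captures.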