The most popular design paradigm for Graph Neural Networks (GNNs) is 1-hop message passing: repeatedly aggregating features from 1-hop neighbors. However, the expressive power of 1-hop message passing is bounded by the Weisfeiler-Lehman (1-WL) test. Recently, researchers extended 1-hop message passing to K-hop message passing, which aggregates information from a node's K-hop neighbors simultaneously. However, there has been no formal analysis of the expressive power of K-hop message passing. In this work, we theoretically characterize its expressive power. Specifically, we first formally distinguish two kinds of kernels for K-hop message passing, which are often conflated in previous works. We then show that K-hop message passing is strictly more powerful than 1-hop message passing. Despite this higher expressive power, K-hop message passing still cannot distinguish some simple regular graphs. To further enhance its expressive power, we introduce KP-GNN, a framework that improves K-hop message passing by leveraging the peripheral subgraph information in each hop. We prove that KP-GNN can distinguish almost all regular graphs, including some distance-regular graphs that previous distance-encoding methods cannot distinguish. Experimental results verify the expressive power and effectiveness of KP-GNN, which achieves competitive results across all benchmark datasets.
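To make the K-hop idea concrete, below is a minimal sketch of one layer-wise K-hop aggregation scheme. It uses the shortest-path-distance notion of a k-th hop (nodes at SPD exactly k from the center); the other kernel commonly used in the literature is the graph-diffusion kernel (nodes reachable by a k-step walk). The function name `khop_message_passing`, the sum aggregator, and the per-hop combination by mean are all illustrative assumptions, not the paper's actual architecture; only NetworkX and NumPy calls that exist with these signatures are used.

```python
import networkx as nx
import numpy as np

def khop_message_passing(G, features, K, num_layers=2):
    """Illustrative K-hop message passing with the shortest-path-distance
    kernel: at every layer, each node sums the features of its k-th-hop
    neighbors separately for k = 1..K, then combines the per-hop results
    (here: own state plus the mean of the per-hop aggregates)."""
    # Precompute hop membership up to distance K:
    # hop_sets[v][k] = nodes at shortest-path distance exactly k from v.
    spd = dict(nx.all_pairs_shortest_path_length(G, cutoff=K))
    hop_sets = {
        v: {k: [u for u, d in dists.items() if d == k] for k in range(1, K + 1)}
        for v, dists in spd.items()
    }
    h = {v: np.asarray(features[v], dtype=float) for v in G.nodes}
    for _ in range(num_layers):
        new_h = {}
        for v in G.nodes:
            per_hop = []
            for k in range(1, K + 1):
                # Aggregate (sum) features over the k-th hop neighborhood.
                agg = sum((h[u] for u in hop_sets[v][k]), np.zeros_like(h[v]))
                per_hop.append(agg)
            new_h[v] = h[v] + np.mean(per_hop, axis=0)
        h = new_h
    return h

# Usage on a small graph with constant initial features:
G = nx.cycle_graph(6)
out = khop_message_passing(G, {v: [1.0] for v in G.nodes}, K=2)
```

Setting K=1 recovers ordinary 1-hop message passing; with K>1 each node also sees structure beyond its immediate neighborhood in a single layer, which is the source of the extra expressive power analyzed in this work. KP-GNN additionally encodes the peripheral subgraph of each hop (the edges among the k-th-hop nodes themselves), which this sketch omits.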