The most popular design paradigm for Graph Neural Networks (GNNs) is 1-hop message passing -- aggregating information from 1-hop neighbors repeatedly. However, the expressive power of 1-hop message passing is bounded by the Weisfeiler-Lehman (1-WL) test. Recently, researchers extended 1-hop message passing to K-hop message passing by aggregating information from all K-hop neighbors of a node simultaneously. However, there has been no formal analysis of the expressive power of K-hop message passing. In this work, we theoretically characterize the expressive power of K-hop message passing. Specifically, we first formally differentiate two kinds of kernels for K-hop message passing, which are often conflated in previous works. We then characterize the expressive power of K-hop message passing by showing that it is more powerful than 1-WL and can distinguish almost all regular graphs. Despite this higher expressive power, we show that K-hop message passing still cannot distinguish some simple regular graphs, and that its expressive power is bounded by 3-WL. To further enhance its expressive power, we introduce the KP-GNN framework, which improves K-hop message passing by leveraging the peripheral subgraph information in each hop. We show that KP-GNN can distinguish many distance regular graphs that could not be distinguished by previous distance encoding or 3-WL methods. Experimental results verify the expressive power and effectiveness of KP-GNN, which achieves competitive results across all benchmark datasets.
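As a rough illustration of the K-hop scheme described above (under the shortest-path-distance kernel), a layer first groups each node's neighbors by their hop distance, aggregates features within each hop separately, and then combines the per-hop messages. The sketch below is a minimal, hypothetical implementation on a plain adjacency list; function names, the sum aggregator, and the 1/k hop weighting are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of one K-hop message passing layer
# (shortest-path-distance kernel). All names are illustrative.
from collections import deque


def k_hop_neighbors(adj, v, K):
    """Group nodes by shortest-path distance 1..K from v, via BFS."""
    dist = {v: 0}
    hops = {k: [] for k in range(1, K + 1)}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == K:      # do not expand beyond K hops
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                hops[dist[w]].append(w)
                queue.append(w)
    return hops


def k_hop_layer(adj, feats, K):
    """One layer: sum features within each hop, then combine across hops."""
    new_feats = {}
    for v in adj:
        hops = k_hop_neighbors(adj, v, K)
        # hop-wise aggregation: one message per hop distance k
        per_hop = [sum(feats[u] for u in hops[k]) for k in range(1, K + 1)]
        # combine step (assumed): self feature plus 1/k-weighted hop messages
        new_feats[v] = feats[v] + sum(m / k for k, m in enumerate(per_hop, 1))
    return new_feats
```

For example, on a path graph 0-1-2-3 with all features 1.0 and K=2, node 1 sees hop-1 neighbors {0, 2} and hop-2 neighbor {3}, so its update is 1.0 + 2.0 + 0.5 = 3.5 -- the two hops contribute separate, distance-aware messages, which is what lifts the scheme past plain 1-hop aggregation.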