Most graph neural network models rely on a particular message passing paradigm, in which node representations are iteratively propagated to each node from its direct neighborhood. While widely adopted, this paradigm creates information propagation bottlenecks: information is repeatedly compressed at intermediate node representations, which causes loss of information and makes it practically impossible to gather meaningful signals from distant nodes. To address this issue, we propose shortest path message passing neural networks, in which node representations are propagated to each node from its shortest path neighborhoods. In this setting, nodes can communicate with each other directly even if they are not neighbors, breaking the information bottleneck and hence leading to more adequately learned representations. Theoretically, our framework generalizes message passing neural networks, resulting in provably more expressive models, and we show that some recent state-of-the-art models are special instances of this framework. Empirically, we verify the capacity of a basic model of this framework on dedicated synthetic experiments and on real-world graph classification and regression benchmarks, and obtain state-of-the-art results.
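To make the propagation scheme concrete, the following is a minimal sketch of one shortest-path message passing update, not the paper's actual model: each node aggregates features separately from every d-hop shortest-path neighborhood (computed here by BFS on an unweighted graph) and combines them with its own state. The function names, the mean aggregation, and the per-distance weights `1/(d+1)` are illustrative assumptions, not taken from the source.

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path (hop) distances from source via BFS on an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def sp_message_passing_layer(adj, features, k, w_self=1.0, ring_weights=None):
    """One hypothetical shortest-path message passing update (illustrative sketch).

    adj       : dict mapping node -> list of neighbor nodes
    features  : list of scalar node features, indexed by node
    k         : maximum shortest-path distance to aggregate over
    Each node u receives a separate mean-aggregated message from the set of
    nodes at exact shortest-path distance d, for d = 1..k, so distant nodes
    communicate directly instead of through repeated 1-hop compression.
    """
    n = len(features)
    if ring_weights is None:
        # assumed per-distance weighting; a learned weight in a real model
        ring_weights = [1.0 / (d + 1) for d in range(1, k + 1)]
    new_features = []
    for u in range(n):
        dist = bfs_distances(adj, u)
        h = w_self * features[u]
        for d in range(1, k + 1):
            ring = [v for v, dv in dist.items() if dv == d]
            if ring:
                # mean aggregation over the distance-d shortest-path neighborhood
                h += ring_weights[d - 1] * sum(features[v] for v in ring) / len(ring)
        new_features.append(h)
    return new_features
```

On a path graph 0-1-2-3 with a signal only at node 0, a single layer with k=3 already delivers that signal to node 3, whereas with k=1 (ordinary message passing) node 3 receives nothing until three layers have been applied.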