Most graph neural network models rely on a particular message passing paradigm, in which node representations are iteratively propagated from each node to its direct neighbors. While widespread, this paradigm creates information propagation bottlenecks: information is repeatedly compressed into intermediate node representations, causing loss of information and making it practically impossible to gather meaningful signals from distant nodes. To address this, we propose shortest path message passing neural networks, in which node representations are propagated to each node from its shortest path neighborhoods. In this setting, nodes can communicate directly with each other even when they are not neighbors, breaking the information bottleneck and hence leading to better learned representations. Our framework generalizes message passing neural networks, resulting in a class of more expressive models that includes some recent state-of-the-art models. We verify the capacity of a basic model instance of this framework on dedicated synthetic experiments as well as real-world graph classification and regression benchmarks, and obtain state-of-the-art results.
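To make the contrast concrete, the following is a minimal scalar sketch of shortest-path-neighborhood aggregation, not the authors' actual model: each node aggregates features from all nodes within a hop budget, grouped by shortest-path distance computed via BFS. The `1/k` distance weighting, mean aggregation, and residual update are illustrative assumptions, not details from the paper.

```python
from collections import deque

def bfs_distances(adj, src):
    # Hop distances from src to every reachable node via breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def sp_message_passing(adj, feats, max_hops=2):
    # One illustrative layer: each node receives messages directly from
    # every node up to max_hops away, so distant signals are not squeezed
    # through intermediate representations. Per-distance mean aggregation
    # with a hypothetical 1/k decay; standard MPNNs are the max_hops=1 case.
    new_feats = {}
    for u in adj:
        dist = bfs_distances(adj, u)
        agg = 0.0
        for k in range(1, max_hops + 1):
            ring = [feats[v] for v, d in dist.items() if d == k]
            if ring:
                agg += (sum(ring) / len(ring)) / k
        new_feats[u] = feats[u] + agg  # residual update (assumption)
    return new_feats

# Usage on a path graph 0 - 1 - 2: node 0 hears from node 2 in one layer.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: 1.0, 1: 2.0, 2: 3.0}
out = sp_message_passing(adj, feats, max_hops=2)
```

With `max_hops=1` the loop above reduces to ordinary direct-neighbor aggregation, which is exactly the bottlenecked paradigm the abstract describes.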