Most graph neural network models rely on a particular message passing paradigm, where node representations are iteratively propagated from each node to its direct neighbors. While very prominent, this paradigm leads to information propagation bottlenecks: information is repeatedly compressed at intermediate node representations, causing a loss of information that makes it practically impossible to gather meaningful signals from distant nodes. To address this issue, we propose shortest path message passing neural networks, where node representations are propagated to each node within its shortest path neighborhoods. In this setting, nodes can communicate directly with each other even if they are not neighbors, breaking the information bottleneck and hence leading to more adequately learned representations. Theoretically, our framework generalizes message passing neural networks, resulting in provably more expressive models. Empirically, we verify the capacity of a basic model of this framework on dedicated synthetic experiments, and on real-world graph classification and regression benchmarks, obtaining several state-of-the-art results.
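The core idea can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes unweighted graphs given as adjacency lists, uses plain BFS to compute shortest path distances, and uses a simple per-distance sum aggregation with concatenation in place of a learned update function. The names `bfs_distances` and `sp_message_passing` are hypothetical.

```python
from collections import deque

def bfs_distances(adj, source):
    """Breadth-first search distances from `source` in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def sp_message_passing(adj, features, max_dist=2):
    """One layer of shortest-path message passing: each node sums the
    features of all nodes at shortest path distance k, for k = 1..max_dist,
    and concatenates its own features with the per-distance aggregates.
    Unlike standard message passing, nodes at distance k > 1 contribute
    directly, without being compressed through intermediate nodes."""
    new_features = {}
    for u in adj:
        dist = bfs_distances(adj, u)
        # One aggregate vector per shortest path distance k.
        agg = {k: [0.0] * len(features[u]) for k in range(1, max_dist + 1)}
        for v, d in dist.items():
            if 1 <= d <= max_dist:
                agg[d] = [a + f for a, f in zip(agg[d], features[v])]
        new_features[u] = list(features[u])
        for k in range(1, max_dist + 1):
            new_features[u] += agg[k]
    return new_features
```

On a path graph 0-1-2-3 with one-hot features, node 0 aggregates node 1 at distance 1 and node 2 at distance 2 in a single layer, whereas a standard message passing layer would only see node 1.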