Graph Neural Networks (GNNs) have shown great potential in the field of graph representation learning. Standard GNNs define a local message-passing mechanism that propagates information over the whole graph domain by stacking multiple layers. This paradigm suffers from two major limitations, over-squashing and poor long-range dependencies, which can be addressed with global attention, but only at the cost of quadratic computational complexity. In this work, we propose an alternative approach to overcome these structural limitations by leveraging the ViT/MLP-Mixer architectures introduced in computer vision. We introduce a new class of GNNs, called Graph MLP-Mixer, that holds three key properties. First, they capture long-range dependencies and mitigate the issue of over-squashing, as demonstrated on the Long Range Graph Benchmark (LRGB) and the TreeNeighbourMatch datasets. Second, they offer better speed and memory efficiency, with complexity linear in the number of nodes and edges, surpassing the related Graph Transformer and expressive GNN models. Third, they show high expressivity in terms of graph isomorphism, as they can distinguish at least 3-WL non-isomorphic graphs. We test our architecture on 4 simulated datasets and 7 real-world benchmarks, and show highly competitive results on all of them.
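The core idea named in the abstract, replacing quadratic all-pairs attention with MLP-Mixer-style token and channel mixing over a fixed set of graph-patch embeddings, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names (`mixer_block`, `mlp`) are hypothetical, and the graph-partitioning step, layer normalization, and positional encodings used in the full model are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP; tanh stands in for the usual GELU to stay dependency-free.
    return np.tanh(x @ w1 + b1) @ w2 + b2

def mixer_block(patches, params):
    """One Mixer-style block over graph-patch embeddings.

    patches: (num_patches, d) array, one embedding per graph patch.
    Token mixing exchanges information ACROSS patches (acting on the
    transposed matrix); channel mixing acts WITHIN each patch. Both are
    dense MLPs of fixed size, so cost grows linearly with the number of
    patches rather than quadratically as in full attention.
    """
    # Token mixing with a residual connection.
    y = patches + mlp(patches.T, *params["token"]).T
    # Channel mixing with a residual connection.
    return y + mlp(y, *params["channel"])

num_patches, d, hidden = 8, 16, 32
params = {
    "token": (0.1 * rng.normal(size=(num_patches, hidden)), np.zeros(hidden),
              0.1 * rng.normal(size=(hidden, num_patches)), np.zeros(num_patches)),
    "channel": (0.1 * rng.normal(size=(d, hidden)), np.zeros(hidden),
                0.1 * rng.normal(size=(hidden, d)), np.zeros(d)),
}
x = rng.normal(size=(num_patches, d))   # stand-in for encoded graph patches
out = mixer_block(x, params)            # shape preserved: (num_patches, d)
```

In the actual architecture, each "patch" would be the pooled embedding of a subgraph produced by a graph-partitioning algorithm; here the patch embeddings are random placeholders, which is enough to show where the linear-in-patches cost comes from.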