Graph transformers have emerged as a promising architecture for a variety of graph learning and representation tasks. Despite their successes, though, it remains challenging to scale graph transformers to large graphs while maintaining accuracy competitive with message-passing networks. In this paper, we introduce \textsc{Exphormer}, a framework for building powerful and scalable graph transformers. \textsc{Exphormer} consists of a sparse attention mechanism based on two components: virtual global nodes and expander graphs, whose mathematical characteristics, such as spectral expansion, pseudorandomness, and sparsity, yield graph transformers with complexity only linear in the size of the graph, while allowing us to prove desirable theoretical properties of the resulting transformer models. We show that incorporating \textsc{Exphormer} into the recently proposed GraphGPS framework produces models with competitive empirical results on a wide variety of graph datasets, including state-of-the-art results on three of them. We also show that \textsc{Exphormer} can scale to larger graphs than previous graph transformer architectures have handled. Code can be found at https://github.com/hamed1375/Exphormer.
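To make the attention pattern concrete, below is a minimal sketch of how such a sparse interaction graph can be assembled; this is an illustration under assumptions, not the authors' exact construction (the reference implementation is in the repository above). The function name `exphormer_attention_edges` and the degree parameter are hypothetical, and `networkx.random_regular_graph` is used as a stand-in expander: random regular graphs are good spectral expanders with high probability.

```python
# Sketch of an Exphormer-style sparse attention pattern (illustrative only;
# the reference implementation lives at github.com/hamed1375/Exphormer).
import networkx as nx

def exphormer_attention_edges(graph: nx.Graph, expander_degree: int = 4):
    """Return directed (query, key) pairs for sparse attention:
    local graph edges + expander edges + a virtual global node."""
    n = graph.number_of_nodes()
    nodes = list(graph.nodes)
    edges = set()

    # 1. Local attention: each node attends over its graph neighbours.
    for u, v in graph.edges:
        edges.add((u, v))
        edges.add((v, u))

    # 2. Expander attention: a random regular graph is, with high
    #    probability, a good spectral expander; its O(n) edges create
    #    short attention paths between distant nodes.
    expander = nx.random_regular_graph(expander_degree, n)
    for i, j in expander.edges:
        edges.add((nodes[i], nodes[j]))
        edges.add((nodes[j], nodes[i]))

    # 3. Global attention: one virtual node attends to and from every
    #    real node, acting as a universal relay.
    virtual = "<global>"
    for u in nodes:
        edges.add((u, virtual))
        edges.add((virtual, u))

    return edges  # O(n + m) pairs: a linear-size attention support
```

On a graph with $n$ nodes and $m$ edges this yields at most $2m + dn + 2n$ directed pairs (for expander degree $d$), so restricting attention to these pairs costs linear rather than quadratic time in the size of the graph.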