The dominant paradigm for machine learning on graphs uses Message Passing Graph Neural Networks (MP-GNNs), in which node representations are updated by aggregating information from their local neighborhoods. Recently, there have been increasing attempts to adapt the Transformer architecture to graphs in an effort to overcome some known limitations of MP-GNNs. A challenging aspect of designing Graph Transformers is integrating the arbitrary graph structure into the architecture. We propose Graph Diffuser (GD) to address this challenge. GD learns to extract structural and positional relationships between distant nodes in the graph, which it then uses to direct the Transformer's attention and node representations. We demonstrate that existing GNNs and Graph Transformers struggle to capture long-range interactions, and show how Graph Diffuser captures them while admitting intuitive visualizations. Experiments on eight benchmarks show Graph Diffuser to be a highly competitive model, outperforming the state-of-the-art in a diverse set of domains.