Graph neural networks (GNNs) have recently achieved state-of-the-art performance in many graph-based applications. Despite their high expressive power, they typically need to perform an expensive recursive neighborhood expansion across multiple training epochs, which raises a scalability issue. Moreover, most GNNs are inflexible: they are restricted to fixed-hop neighborhoods and insensitive to the actual receptive field demands of different nodes. We circumvent these limitations by introducing a scalable and flexible Graph Attention Multilayer Perceptron (GAMLP). By separating the non-linear transformation from feature propagation, GAMLP significantly improves scalability and efficiency, performing the propagation procedure in a pre-computed manner. With three principled receptive field attention mechanisms, each node in GAMLP can flexibly and adaptively leverage the propagated features over receptive fields of different sizes. We conduct extensive evaluations on three large open graph benchmarks (ogbn-papers100M, ogbn-products, and ogbn-mag), demonstrating that GAMLP not only achieves state-of-the-art performance but also provides high scalability and efficiency.
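To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation: feature propagation is pre-computed once before training, and a simple per-node attention over the hop features stands in for GAMLP's receptive field attention. All names (`precompute_propagation`, `HopAttentionMLP`) are illustrative assumptions, not from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the authors' code): precompute multi-hop
# features once, then train an MLP with per-node hop-wise attention.

def precompute_propagation(adj_norm, x, num_hops):
    """Propagate features num_hops times with a normalized adjacency.

    adj_norm: sparse normalized adjacency (N x N), e.g. D^-1/2 A D^-1/2.
    x: node feature matrix (N x d).
    Returns [x, A_hat x, A_hat^2 x, ...] of length num_hops + 1,
    computed once before training (no propagation per epoch).
    """
    feats = [x]
    for _ in range(num_hops):
        feats.append(torch.sparse.mm(adj_norm, feats[-1]))
    return feats


class HopAttentionMLP(nn.Module):
    """Combine precomputed hop features with per-node attention weights,
    then classify with an MLP. A simplified stand-in for GAMLP's
    receptive field attention."""

    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.att = nn.Linear(in_dim, 1)  # scores each hop's feature
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, hop_feats):
        # hop_feats: list of (N x d) tensors, one per hop
        stacked = torch.stack(hop_feats, dim=1)   # (N, K+1, d)
        scores = self.att(stacked).squeeze(-1)    # (N, K+1)
        weights = torch.softmax(scores, dim=1)    # per-node hop weights
        combined = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # (N, d)
        return self.mlp(combined)
```

Because the propagation runs only once, the training loop reduces to mini-batch training of a plain MLP over the cached hop features; per-node softmax weights over hops let each node emphasize a different effective receptive field.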