Common wisdom in the graph neural network (GNN) community dictates that anisotropic models -- in which messages sent between nodes are a function of both the source and target node -- are required to achieve state-of-the-art performance. Benchmarks to date have demonstrated that these models perform better than comparable isotropic models -- where messages are a function of the source node only. In this work we provide empirical evidence challenging this narrative: we propose an isotropic GNN, which we call Efficient Graph Convolution (EGC), that consistently outperforms comparable anisotropic models, including the popular GAT and PNA architectures, by using spatially-varying adaptive filters. In addition to raising important questions for the GNN community, our work has significant real-world implications for efficiency. EGC achieves higher model accuracy, with lower memory consumption and latency, along with characteristics suited to accelerator implementation, while being a drop-in replacement for existing architectures. As an isotropic model, it requires memory proportional to the number of vertices in the graph ($\mathcal{O}(V)$); in contrast, anisotropic models require memory proportional to the number of edges ($\mathcal{O}(E)$). We demonstrate that EGC outperforms existing approaches across 6 large and diverse benchmark datasets, and conclude by discussing questions that our work raises for the community going forward. Code and pretrained models for our experiments are provided at https://github.com/shyam196/egc.
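The isotropic/anisotropic distinction, and the resulting $\mathcal{O}(V)$ versus $\mathcal{O}(E)$ memory gap, can be made concrete with a short PyTorch sketch. This is an illustrative example with invented shapes and layers (`f`, `g`), not the paper's EGC implementation:

```python
import torch

num_nodes, dim = 4, 8
x = torch.randn(num_nodes, dim)                 # node features
edge_index = torch.tensor([[0, 1, 2, 3],        # source nodes
                           [1, 2, 3, 0]])       # target nodes
src, dst = edge_index

# Isotropic: each message depends on the source node only, so aggregation
# reduces to a sparse matrix-vector product. The transformed node matrix h
# is O(V); no per-edge message tensor needs to be materialised.
f = torch.nn.Linear(dim, dim)
h = f(x)
adj = torch.sparse_coo_tensor(torch.stack([dst, src]),
                              torch.ones(src.numel()),
                              (num_nodes, num_nodes))
iso_out = torch.sparse.mm(adj, h)               # fused SpMM, O(V) memory

# Anisotropic (e.g. GAT-style): each message depends on source AND target,
# so a distinct tensor of per-edge messages must be materialised -- O(E).
g = torch.nn.Linear(2 * dim, dim)
msgs = g(torch.cat([x[src], x[dst]], dim=-1))   # one row per edge
aniso_out = torch.zeros_like(x).index_add_(0, dst, msgs)
```

On dense graphs, where $E$ grows much faster than $V$, the per-edge message tensor dominates memory, which is the efficiency gap the abstract refers to.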