Graph neural networks (GNNs) have been shown to be highly sensitive to the choice of aggregation function. While summing over a node's neighbours can approximate any permutation-invariant function over discrete inputs, Cohen-Karlik et al. [2020] proved there are set-aggregation problems for which summing cannot generalise to unbounded inputs, proposing recurrent neural networks regularised towards permutation-invariance as a more expressive aggregator. We show that these results carry over to the graph domain: GNNs equipped with recurrent aggregators are competitive with state-of-the-art permutation-invariant aggregators, on both synthetic benchmarks and real-world problems. However, despite their benefits, recurrent aggregators have $O(V)$ depth, which makes them both difficult to parallelise and hard to train on large graphs. Inspired by the observation that a well-behaved aggregator for a GNN is a commutative monoid over its latent space, we propose a framework for constructing learnable, commutative, associative binary operators. With this framework, we construct an aggregator of $O(\log V)$ depth, yielding exponential improvements in both parallelism and dependency length while remaining competitive with recurrent aggregators. Based on our empirical observations, our proposed learnable commutative monoid (LCM) aggregator represents a favourable tradeoff between efficient and expressive aggregators.
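To make the construction concrete, here is a minimal PyTorch sketch of the idea: a learnable binary operator made commutative by construction (the network only sees symmetric features of its two inputs), reduced over a neighbourhood with a balanced binary tree of $O(\log n)$ depth. The names `BinaryOp` and `tree_aggregate`, the symmetrised-MLP parameterisation, and the zero-vector identity used for padding are illustrative assumptions, not the paper's exact recipe; in particular, associativity is not enforced here and would have to be encouraged during training.

```python
import torch
import torch.nn as nn

class BinaryOp(nn.Module):
    """Hypothetical learnable binary operator f: R^d x R^d -> R^d.

    Commutativity holds by construction: the MLP only sees the
    symmetric features (x + y, x * y), so f(x, y) == f(y, x).
    Associativity is NOT guaranteed and is assumed to be handled
    by regularisation during training (a sketch-level assumption).
    """
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([x + y, x * y], dim=-1))


def tree_aggregate(op: BinaryOp, xs: torch.Tensor,
                   identity: torch.Tensor) -> torch.Tensor:
    """Reduce n neighbour messages with a balanced binary tree.

    xs: (n, d) messages; identity: (d,) placeholder identity element
    used to pad odd-sized levels. Each level halves n, so the
    reduction has O(log n) sequential depth, with each level
    applying `op` to all pairs in parallel.
    """
    while xs.size(0) > 1:
        if xs.size(0) % 2 == 1:          # pad odd levels with the identity
            xs = torch.cat([xs, identity.unsqueeze(0)], dim=0)
        xs = op(xs[0::2], xs[1::2])      # combine adjacent pairs in parallel
    return xs[0]


# Usage: aggregate 7 neighbour embeddings of dimension 16.
op = BinaryOp(dim=16)
identity = torch.zeros(16)               # placeholder; could be a learned parameter
out = tree_aggregate(op, torch.randn(7, 16), identity)
assert out.shape == (16,)
```

The depth claim follows directly from the loop structure: each iteration halves the number of elements, so $n$ messages need only $\lceil \log_2 n \rceil$ sequential applications of the operator, versus the $O(n)$ chain of a recurrent aggregator; the identity element is what lets the monoid view handle empty and odd-sized neighbourhoods cleanly.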