Graph neural networks (GNNs) are one of the most popular approaches to applying deep learning to graph-structured data, and they have shown state-of-the-art performance on a variety of tasks. However, according to a recent study, a careful choice of the pooling functions used for the aggregation and readout operations in GNNs is crucial for enabling GNNs to extrapolate. Without proper choices of pooling functions, which vary across tasks, GNNs completely fail to generalize to out-of-distribution data, while the number of possible choices grows exponentially with the number of layers. In this paper, we present GNP, an $L^p$ norm-like pooling function that is trainable end-to-end for any given task. Notably, GNP generalizes most of the widely-used pooling functions. We verify experimentally that simply using GNP for every aggregation and readout operation enables GNNs to extrapolate well on many node-level, graph-level, and set-related tasks, and that GNP sometimes performs even better than the best-performing choice among existing pooling functions.
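For intuition only, the sketch below shows one way a trainable $L^p$ norm-like pooling module could look in PyTorch; the module name, parameterization, and clamping details are illustrative assumptions rather than the paper's exact GNP formulation. As the learned exponent $p$ moves from 1 toward infinity, the operation interpolates between mean-like and max-like pooling, which is the sense in which such a function can subsume several widely-used pooling choices.

```python
import torch
import torch.nn as nn


class GeneralizedNormPooling(nn.Module):
    """Minimal sketch of an L^p norm-like pooling with a trainable exponent.

    Illustrative approximation, not the paper's exact GNP definition:
    p close to 1 behaves like mean pooling over absolute values, and
    large p approaches max pooling over absolute values.
    """

    def __init__(self, init_p: float = 1.0, eps: float = 1e-6):
        super().__init__()
        # Trainable exponent, learned end-to-end with the rest of the GNN.
        self.p = nn.Parameter(torch.tensor(init_p))
        self.eps = eps

    def forward(self, x: torch.Tensor, dim: int = 0) -> torch.Tensor:
        # Clamp p so the norm stays well-defined during training.
        p = self.p.clamp(min=1.0)
        # (mean_i |x_i|^p)^(1/p): interpolates between mean- and max-like pooling.
        return x.abs().clamp(min=self.eps).pow(p).mean(dim=dim).pow(1.0 / p)


# Usage sketch: pool a set of 5 node embeddings of dimension 8 into one vector.
pool = GeneralizedNormPooling(init_p=2.0)
node_embeddings = torch.randn(5, 8)
readout = pool(node_embeddings, dim=0)  # shape: (8,)
```

Dropping the mean normalization (or rescaling by the set size) would instead recover a sum-like variant; such choices are design decisions around the same trainable-norm idea.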