While graph kernels (GKs) are easy to train and enjoy provable theoretical guarantees, their practical performance is limited by their expressive power, as the kernel function often depends on hand-crafted combinatorial features of graphs. Compared to graph kernels, graph neural networks (GNNs) usually achieve better practical performance, as GNNs use multi-layer architectures and non-linear activation functions to extract high-order information from graphs as features. However, due to the large number of hyper-parameters and the non-convex nature of the training procedure, GNNs are harder to train. Theoretical guarantees for GNNs are also not well understood. Furthermore, the expressive power of GNNs scales with the number of parameters, so it is hard to exploit the full power of GNNs when computing resources are limited. The current paper presents a new class of graph kernels, Graph Neural Tangent Kernels (GNTKs), which correspond to infinitely wide multi-layer GNNs trained by gradient descent. GNTKs enjoy the full expressive power of GNNs and inherit the advantages of GKs. Theoretically, we show that GNTKs provably learn a class of smooth functions on graphs. Empirically, we test GNTKs on graph classification datasets and show that they achieve strong performance.
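As an illustrative sketch only (not part of the paper), the snippet below shows how a precomputed GNTK-style Gram matrix over a set of graphs could be plugged into an off-the-shelf kernel machine such as an SVM for graph classification. The Gram matrix here is a random positive semi-definite stand-in for a real GNTK, and all variable names are hypothetical.

    # Minimal sketch: use a precomputed graph-kernel Gram matrix with a kernel SVM.
    # The "gram" matrix is a random PSD stand-in for a real GNTK (assumption).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_graphs = 60

    # Stand-in for the GNTK Gram matrix over all graphs (symmetric PSD).
    features = rng.normal(size=(n_graphs, 16))
    gram = features @ features.T

    labels = rng.integers(0, 2, size=n_graphs)  # binary graph-classification labels

    # Simple train/test split over graph indices.
    train_idx = np.arange(0, 45)
    test_idx = np.arange(45, n_graphs)

    # Kernel SVM with a precomputed kernel; C is the usual regularization hyper-parameter.
    clf = SVC(kernel="precomputed", C=1.0)
    clf.fit(gram[np.ix_(train_idx, train_idx)], labels[train_idx])

    # At test time, the kernel values between test graphs and training graphs are needed.
    accuracy = clf.score(gram[np.ix_(test_idx, train_idx)], labels[test_idx])
    print(f"test accuracy: {accuracy:.3f}")

Because the kernel is fixed rather than learned, training reduces to a convex problem in the kernel machine, which is one sense in which GNTKs inherit the ease of training of graph kernels.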