Gaussian processes (GPs) are an attractive class of machine learning models because of their simplicity and their flexibility as building blocks of more complex Bayesian models. Meanwhile, graph neural networks (GNNs) recently emerged as a promising class of models for graph-structured data in semi-supervised learning and beyond; their competitive performance is often attributed to properly capturing the graph inductive bias. In this work, we introduce this inductive bias into GPs to improve their predictive performance on graph-structured data. We show that a prominent GNN, the graph convolutional network (GCN), is equivalent to a GP when its layers are infinitely wide, and we analyze the universality of the resulting kernel as well as its limiting behavior with increasing depth. We further present a programmable procedure for composing covariance kernels inspired by this equivalence, and derive example kernels corresponding to several interesting members of the GNN family. We also propose a computationally efficient approximation of the covariance matrix that enables scalable posterior inference on large-scale data. We demonstrate that these graph-based kernels achieve competitive classification and regression performance, as well as advantages in computation time, compared with the respective GNNs.
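To make the infinite-width equivalence concrete, the following is a minimal sketch of the kind of layer-wise kernel recursion such an equivalence induces, assuming ReLU activations (whose Gaussian expectation admits the order-1 arc-cosine closed form) and the symmetrically normalized adjacency with self-loops used by GCN. The function names `gcn_gp_kernel` and `relu_expectation` are illustrative, not an API from the paper.

```python
import numpy as np

def relu_expectation(K):
    """E[relu(u) relu(v)] for (u, v) ~ N(0, K): the order-1 arc-cosine form."""
    d = np.sqrt(np.clip(np.diag(K), 1e-12, None))
    cos_t = np.clip(K / np.outer(d, d), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def gcn_gp_kernel(A, X, num_layers=2):
    """Covariance over nodes of an infinitely wide ReLU GCN (illustrative sketch)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D^{-1/2}
    A_hat = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    K = A_hat @ (X @ X.T / X.shape[1]) @ A_hat.T    # first layer is linear in the inputs
    for _ in range(num_layers - 1):
        K = A_hat @ relu_expectation(K) @ A_hat.T   # propagate through one ReLU layer
    return K

# Tiny usage example: a 3-node path graph with random 5-dimensional node features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.random.default_rng(0).normal(size=(3, 5))
K = gcn_gp_kernel(A, X, num_layers=2)  # 3 x 3 covariance matrix over nodes
```

Swapping `relu_expectation` for the expectation of a different activation, or `A_hat` for a different propagation operator, is the sense in which such kernel construction is "programmable": each GNN architectural choice maps to a corresponding transformation of the covariance.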
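The abstract leaves the covariance approximation unspecified; purely as an illustration of how posterior inference can be made scalable once a graph kernel is in hand, the sketch below uses a generic Nyström low-rank factorization combined with a Woodbury-style solve, which is one standard route and not necessarily the paper's scheme. The names `nystrom_features` and `gp_posterior_mean` are hypothetical.

```python
import numpy as np

def nystrom_features(K, landmark_idx, jitter=1e-8):
    """Factor K ~= Phi @ Phi.T from m landmark columns (Nystrom approximation)."""
    C = K[:, landmark_idx]                          # (n, m) cross-covariances
    W = K[np.ix_(landmark_idx, landmark_idx)]       # (m, m) landmark block
    evals, evecs = np.linalg.eigh(W + jitter * np.eye(len(landmark_idx)))
    W_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(np.clip(evals, jitter, None))) @ evecs.T
    return C @ W_inv_sqrt                           # (n, m) feature map

def gp_posterior_mean(Phi, y, train_idx, noise=1e-2):
    """GP regression posterior mean under K ~= Phi @ Phi.T.

    The Woodbury identity turns the usual (n x n) solve into an (m x m) one:
    Phi_t.T (Phi_t Phi_t.T + s I)^{-1} y = (Phi_t.T Phi_t + s I_m)^{-1} Phi_t.T y.
    """
    Pt = Phi[train_idx]
    m = Phi.shape[1]
    alpha = np.linalg.solve(Pt.T @ Pt + noise * np.eye(m), Pt.T @ y)
    return Phi @ alpha                              # posterior mean at every node
```

With m landmarks, the dominant costs are the O(n m) kernel columns and an O(m^3) solve, instead of the O(n^3) solve of exact GP inference; this is the generic trade-off behind most scalable covariance approximations.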