Neural networks and Gaussian processes are complementary in their strengths and weaknesses. A better understanding of their relationship promises to let each method benefit from the strengths of the other. In this work, we establish an equivalence between the forward passes of neural networks and (deep) sparse Gaussian process models. The theory we develop is based on interpreting activation functions as interdomain inducing features, through a rigorous analysis of the interplay between activation functions and kernels. This results in models that can be seen either as neural networks with improved uncertainty prediction or as deep Gaussian processes with increased prediction accuracy. These claims are supported by experimental results on regression and classification datasets.
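The structural correspondence claimed above can be illustrated with a minimal sketch: if the M inducing features of a sparse GP are chosen to be activation functions phi_i(x) = relu(z_i · x), then the posterior mean mu(x) = sum_i w_i phi_i(x) is computationally identical to a one-hidden-layer network forward pass. All names (`Z`, `w`, `sparse_gp_mean`, `nn_forward`) and the choice of ReLU are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

M, D = 8, 3                  # number of inducing features, input dimension
Z = rng.normal(size=(M, D))  # inducing inputs / first-layer weights (assumed)
w = rng.normal(size=M)       # posterior mean weights / second-layer weights (assumed)


def relu(a):
    return np.maximum(a, 0.0)


def sparse_gp_mean(x):
    # Sparse GP posterior mean in the inducing-feature basis:
    # mu(x) = sum_i w_i * phi_i(x), with phi_i(x) = relu(z_i . x).
    phi = relu(Z @ x)
    return phi @ w


def nn_forward(x):
    # The same computation read as a neural network: a hidden ReLU layer
    # with weights Z followed by a linear output layer with weights w.
    hidden = relu(Z @ x)
    return hidden @ w


x = rng.normal(size=D)
assert np.allclose(sparse_gp_mean(x), nn_forward(x))
```

The two functions are deliberately the same computation written in both vocabularies; the paper's contribution is showing when this identification is mathematically justified, which this toy sketch does not attempt.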