Deep Gaussian processes (DGPs) have struggled for relevance in applications due to the challenges and cost associated with Bayesian inference. In this paper we propose a sparse variational approximation for DGPs for which the approximate posterior mean has the same mathematical structure as a Deep Neural Network (DNN). We make the forward pass through a DGP equivalent to a ReLU DNN by finding an interdomain transformation that represents the GP posterior mean as a sum of ReLU basis functions. This unification enables the initialisation and training of the DGP as a neural network, leveraging well-established practice in the deep learning community, and so greatly aiding the inference task. The experiments demonstrate improved accuracy and faster training compared to current DGP methods, while retaining favourable predictive uncertainties.
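As a rough illustration of the claimed correspondence (a minimal sketch only; the array names `W`, `b`, and `alpha` are our own placeholders, not the paper's notation), the posterior mean of a single GP layer under such ReLU interdomain features is a weighted sum of ReLU basis functions, which is exactly the forward pass of one dense ReLU layer:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Sketch of the identity assumed here: with ReLU interdomain inducing
# features, the sparse variational GP posterior mean takes the form
#   m(x) = sum_m alpha_m * relu(w_m @ x + b_m),
# i.e. a single-hidden-layer ReLU network. Stacking layers gives the
# DNN-structured forward pass through the DGP.

rng = np.random.default_rng(0)
M, D = 64, 3                      # number of inducing features, input dimension
W = rng.standard_normal((M, D))   # directions of the ReLU basis functions
b = rng.standard_normal(M)        # offsets of the ReLU basis functions
alpha = rng.standard_normal(M)    # weights set by the variational posterior

def posterior_mean(X):
    """GP posterior mean under ReLU interdomain features: a dense ReLU layer."""
    return relu(X @ W.T + b) @ alpha

X = rng.standard_normal((5, D))
print(posterior_mean(X))          # identical to a standard DNN forward pass
```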