To achieve scalable and accurate inference for latent Gaussian processes, we propose a variational approximation based on a family of Gaussian distributions whose covariance matrices have sparse inverse Cholesky (SIC) factors. We combine this variational approximation of the posterior with a similar and efficient SIC-restricted Kullback-Leibler-optimal approximation of the prior. We then focus on a particular SIC ordering and nearest-neighbor-based sparsity pattern resulting in highly accurate prior and posterior approximations. For this setting, our variational approximation can be computed via stochastic gradient descent in polylogarithmic time per iteration. We provide numerical comparisons showing that the proposed double-Kullback-Leibler-optimal Gaussian-process approximation (DKLGP) can sometimes be vastly more accurate than alternative approaches such as inducing-point and mean-field approximations at similar computational complexity.
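As a rough illustration of the variational family described above (not the authors' implementation), the sketch below builds a Gaussian distribution N(m, (L Lᵀ)⁻¹) whose inverse Cholesky factor L is sparse and lower-triangular, with each row depending on at most k preceding nearest-neighbor indices under a left-to-right ordering of 1-D inputs. All names, the sparsity pattern, and the randomly filled entries of L are assumptions for illustration; the point is that with sparse L, the log-determinant and sampling cost O(nk) rather than O(n³).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

# Hypothetical sketch: a Gaussian variational family N(m, (L L^T)^{-1})
# with a sparse inverse Cholesky factor L. Row i of L is nonzero only at
# column i and at (up to) k nearest preceding indices in the ordering.
rng = np.random.default_rng(0)
n, k = 200, 5
x = np.sort(rng.uniform(0.0, 1.0, n))  # ordered 1-D inputs (assumed setting)

rows, cols, vals = [], [], []
for i in range(n):
    # k nearest previously ordered points (assumed sparsity pattern)
    nbrs = sorted(range(max(0, i - k), i), key=lambda j: abs(x[i] - x[j]))
    rows.append(i); cols.append(i)
    vals.append(1.0 + rng.uniform(0.0, 0.1))  # positive diagonal
    for j in nbrs:
        rows.append(i); cols.append(j)
        vals.append(0.1 * rng.standard_normal())  # placeholder entries
L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

# log det(Sigma) for Sigma = (L L^T)^{-1}, read off the diagonal of L
logdet_sigma = -2.0 * np.sum(np.log(L.diagonal()))

# sampling: y = m + L^{-T} eps, a sparse triangular solve in O(nk) time
m = np.zeros(n)
eps = rng.standard_normal(n)
y = m + spsolve_triangular(L.T.tocsr(), eps, lower=False)
```

In the actual method, the nonzero entries of L (and the mean m) would be the free variational parameters optimized by stochastic gradient descent; the example only demonstrates why the SIC structure makes each such iteration cheap.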