Variational approximations to Gaussian processes (GPs) typically use a small set of inducing points to form a low-rank approximation to the covariance matrix. In this work, we instead exploit a sparse approximation of the precision matrix. We propose the variational nearest neighbor Gaussian process (VNNGP), which introduces a prior that retains correlations only within the K nearest-neighboring observations, thereby inducing sparse precision structure. Under the variational framework, VNNGP's objective factorizes over both observations and inducing points, enabling stochastic optimization with a time complexity of $O(K^3)$. Hence, we can scale the number of inducing points arbitrarily, even to the point of placing an inducing point at every observed location. We compare VNNGP to other scalable GPs through various experiments and demonstrate that VNNGP (1) can dramatically outperform low-rank methods, and (2) is less prone to overfitting than other nearest neighbor methods.
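To make the sparse-precision idea concrete, the following is a minimal sketch (not the paper's implementation) of a Vecchia-style nearest-neighbor construction: each ordered point is conditioned only on its K nearest preceding points, so every local computation is an $O(K^3)$ solve and the full pass costs $O(NK^3)$. The RBF kernel, the 1-D inputs, and the helper names (`nn_precision_factors`) are illustrative assumptions.

```python
import numpy as np

def rbf(x, y, lengthscale=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def nn_precision_factors(X, K, jitter=1e-6):
    """For each ordered point i, condition only on its K nearest
    *preceding* points (Vecchia-style), yielding the sparse factors of
    an approximate precision matrix. Each local linear solve involves a
    K x K kernel matrix, i.e. O(K^3) per point. Illustrative sketch."""
    N = X.shape[0]
    coeffs, cond_vars = [], []
    for i in range(N):
        prev = np.arange(i)
        # K nearest neighbors among earlier points (by input distance).
        nbrs = prev[np.argsort(np.abs(X[prev] - X[i]))[:K]]
        k_ii = rbf(X[i:i + 1], X[i:i + 1])[0, 0] + jitter
        if len(nbrs) == 0:
            coeffs.append((nbrs, np.array([])))
            cond_vars.append(k_ii)
            continue
        Knn = rbf(X[nbrs], X[nbrs]) + jitter * np.eye(len(nbrs))
        kin = rbf(X[i:i + 1], X[nbrs])[0]
        b = np.linalg.solve(Knn, kin)          # O(K^3) local solve
        coeffs.append((nbrs, b))
        cond_vars.append(k_ii - kin @ b)       # conditional variance
    return coeffs, cond_vars

X = np.sort(np.random.default_rng(0).uniform(0.0, 5.0, size=50))
coeffs, cond_vars = nn_precision_factors(X, K=3)
print(all(v > 0 for v in cond_vars))  # conditional variances stay positive
```

Because the factorization couples each point to at most K others, the objective decomposes into per-point terms, which is what allows the stochastic optimization over observations and inducing points described in the abstract.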