Scalable Gaussian process (GP) inference is essential for sequential decision-making tasks, yet improving GP scalability remains a challenging problem with many open avenues of research. This paper focuses on iterative GPs, where iterative linear solvers, such as conjugate gradients, stochastic gradient descent, or alternating projections, are used to approximate the GP posterior. We propose a new method that improves solver convergence on a large linear system by leveraging the known solution to a smaller system contained within it. This is significant for tasks with incremental data additions, and we show that our technique achieves speed-ups when solving to tolerance, as well as improved Bayesian optimisation performance under a fixed compute budget.
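The core idea of reusing a smaller system's solution can be illustrated with warm-started conjugate gradients. The sketch below is a minimal, hypothetical illustration (not the paper's method): it solves the GP linear system $(K + \sigma^2 I)v = y$ for $n$ points, then, after $m$ new points arrive, initialises CG on the enlarged system with the old solution padded by zeros. The kernel, data, and tolerances are all assumptions for demonstration.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel matrix (a common GP choice)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Synthetic 1-D regression data: n initial points, m incremental additions.
n, m, noise = 200, 20, 0.1
X = rng.uniform(-3.0, 3.0, (n + m, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n + m)

# Solve the small system (K_n + sigma^2 I) v = y_n first.
K_small = rbf_kernel(X[:n], X[:n]) + noise * np.eye(n)
v_small, info = cg(K_small, y[:n])

# After adding m points, solve the enlarged system two ways.
K_large = rbf_kernel(X, X) + noise * np.eye(n + m)

iters = {"cold": 0, "warm": 0}
def make_callback(key):
    def cb(xk):
        iters[key] += 1  # count CG iterations
    return cb

# Cold start: default zero initialisation.
v_cold, _ = cg(K_large, y, callback=make_callback("cold"))

# Warm start: reuse the small-system solution, zero-padded for new points.
x0 = np.concatenate([v_small, np.zeros(m)])
v_warm, _ = cg(K_large, y, x0=x0, callback=make_callback("warm"))

print(f"cold-start iterations: {iters['cold']}")
print(f"warm-start iterations: {iters['warm']}")
```

When the new points perturb the system only mildly, the padded old solution is close to the new one, so the warm-started solver typically needs fewer iterations to reach the same tolerance; the speed-up depends on how much the augmented system differs from the original.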