Despite advances in scalable models, the inference tools used for Gaussian processes (GPs) have yet to fully capitalize on developments in computing hardware. We present an efficient and general approach to GP inference based on Blackbox Matrix-Matrix multiplication (BBMM). BBMM inference uses a modified batched version of the conjugate gradients algorithm to derive all terms for training and inference in a single call. BBMM reduces the asymptotic complexity of exact GP inference from $O(n^3)$ to $O(n^2)$. Adapting this algorithm to scalable approximations and complex GP models simply requires a routine for efficient matrix-matrix multiplication with the kernel and its derivative. In addition, BBMM uses a specialized preconditioner to substantially speed up convergence. In experiments we show that BBMM effectively uses GPU hardware to dramatically accelerate both exact GP inference and scalable approximations. Additionally, we provide GPyTorch, a software platform for scalable GP inference via BBMM, built on PyTorch.
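The abstract's central primitive is easy to state concretely: each conjugate gradients iteration touches the kernel matrix only through one matrix-matrix multiply. Below is a minimal sketch in PyTorch of a conjugate gradients solver advanced in parallel over multiple right-hand sides. The names `batched_cg` and `matmul_fn` and all data are illustrative assumptions of ours; this is textbook CG vectorized over columns, not the paper's modified batched CG routine, which additionally applies a preconditioner and extracts the quantities needed for log-determinant estimation.

```python
import torch

def batched_cg(matmul_fn, B, max_iter=100, tol=1e-6):
    """Solve A X = B for symmetric positive-definite A, given only
    matmul_fn(V) = A @ V. All t right-hand sides (columns of B) are
    advanced together, so each iteration costs one matrix-matrix
    multiply -- the access pattern the abstract refers to."""
    X = torch.zeros_like(B)
    R = B.clone()                     # residuals B - A X (X = 0 initially)
    P = R.clone()                     # search directions
    rs_old = (R * R).sum(dim=0)       # squared residual norm per column
    for _ in range(max_iter):
        AP = matmul_fn(P)                     # the single mat-mat multiply
        alpha = rs_old / (P * AP).sum(dim=0)  # per-column step sizes
        X = X + alpha * P
        R = R - alpha * AP
        rs_new = (R * R).sum(dim=0)
        if rs_new.max().sqrt() < tol:         # all columns converged
            break
        P = R + (rs_new / rs_old) * P         # next conjugate directions
        rs_old = rs_new
    return X

# Illustrative use: solve (K + sigma^2 I) X = B with an RBF kernel matrix.
x = torch.linspace(0, 1, 500)
K = torch.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.2 ** 2)
K_hat = K + 0.1 * torch.eye(500)
B = torch.randn(500, 16)
X = batched_cg(lambda V: K_hat @ V, B)
```

Because the solver never forms or factorizes the kernel matrix, swapping in a structured or approximate kernel only requires replacing `matmul_fn`, which is the sense in which the approach is "blackbox."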
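For completeness, here is a minimal sketch of how a user interacts with GPyTorch, following the library's standard exact-GP regression pattern; the model class, synthetic data, and hyperparameter choices are our own illustration, not taken from the paper. The solves and log-determinants inside the marginal log likelihood are dispatched to the BBMM machinery, so no Cholesky factorization appears in user code.

```python
import math
import torch
import gpytorch

class ExactGPModel(gpytorch.models.ExactGP):
    """A standard exact GP regression model: constant mean, scaled RBF kernel."""
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# Illustrative 1-D training data.
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(2 * math.pi * train_x) + 0.1 * torch.randn(100)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

# Hyperparameter training by maximizing the marginal log likelihood.
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(50):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)  # computed via BBMM internally
    loss.backward()                       # kernel derivatives reuse the same mat-mat products
    optimizer.step()

# Posterior prediction at test inputs.
model.eval()
likelihood.eval()
with torch.no_grad():
    pred = likelihood(model(torch.linspace(0, 1, 51)))
```

Moving the same model and data to a GPU with `.cuda()` is all that is needed to exploit the hardware acceleration the abstract describes, since every expensive operation is already expressed as a batched matrix-matrix multiply.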