A key challenge in scaling Gaussian Process (GP) regression to massive datasets is that exact inference requires computation with a dense n × n kernel matrix, where n is the number of data points. Significant work focuses on approximating the kernel matrix via interpolation using a smaller set of m inducing points. Structured kernel interpolation (SKI) is among the most scalable methods: by placing inducing points on a dense grid and using structured matrix algebra, SKI achieves a per-iteration time of O(n + m log m) for approximate inference. This linear scaling in n enables inference for very large datasets; however, the cost is incurred on every iteration, which remains a limitation for extremely large n. We show that the SKI per-iteration time can be reduced to O(m log m) after a single O(n) precomputation step by reframing SKI as solving a natural Bayesian linear regression problem with a fixed set of m compact basis functions. With per-iteration complexity independent of the dataset size n for a fixed grid, our method scales to truly massive datasets. We demonstrate speedups in practice for a wide range of m and n and apply the method to GP inference on a three-dimensional weather radar dataset with over 100 million points.
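The precompute-then-iterate structure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: we assume a sparse n × m interpolation-weight matrix W (here filled with random entries standing in for SKI's local interpolation weights) and use a plain dense m × m solve in place of the structured O(m log m) grid algebra.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n, m = 10_000, 256  # n data points, m inducing points (illustrative sizes)

# Hypothetical sparse interpolation-weight matrix W (n x m): in SKI each
# row has only a few nonzeros from local interpolation on the grid; here
# we mimic that with 4 random nonzeros per row.
rows = np.repeat(np.arange(n), 4)
cols = rng.integers(0, m, size=4 * n)
vals = rng.random(4 * n)
W = sparse.csr_matrix((vals, (rows, cols)), shape=(n, m))
y = rng.standard_normal(n)

# One-time O(n) precomputation: project the data onto the m basis functions.
WtW = (W.T @ W).toarray()   # m x m Gram matrix of the basis
Wty = W.T @ y               # m-vector of projected targets

# Per-iteration work now touches only m-sized quantities, e.g. a
# regularized linear solve for the basis-function weights. A dense solve
# is O(m^3); the structured Kronecker/Toeplitz algebra on a grid that
# yields O(m log m) per iteration is not shown here.
sigma2 = 0.1  # assumed noise variance for the sketch
z = np.linalg.solve(WtW + sigma2 * np.eye(m), Wty)
print(z.shape)
```

The point of the sketch is that after W^T W and W^T y are formed once in O(n) time, no subsequent step ever touches an n-sized object again.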