Approximations to Gaussian processes based on inducing variables, combined with variational inference techniques, enable state-of-the-art sparse approaches that scale GP inference to large datasets through mini-batch learning. In this work, we address one limitation of sparse GPs: the difficulty of handling a large number of inducing variables without imposing a special structure on the inducing inputs. In particular, we introduce a novel hierarchical prior that imposes sparsity on the set of inducing variables. We treat our model variationally, and we show experimentally that, when sparsity on the inducing variables is realized by considering the inducing inputs nearest to a random mini-batch of the data, our approach yields considerable computational gains over standard sparse GPs. We carry out an extensive experimental validation demonstrating the effectiveness of our approach relative to the state-of-the-art. Our approach makes it possible to use sparse GPs with a large number of inducing points without incurring a prohibitive computational cost.
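To make the key idea concrete, the following is a minimal sketch (not the authors' implementation; the function name, the neighbourhood size `H`, and the use of Euclidean distance are illustrative assumptions) of how, for each mini-batch, one could select only the inducing inputs nearest to the batch points, so that only a small subset of inducing variables participates in each update:

```python
import numpy as np

def nearest_inducing_subset(Z, X_batch, H):
    """For each point in the mini-batch, find its H nearest inducing
    inputs (Euclidean distance here, as an illustrative choice) and
    return the union of their indices. Restricting each update to this
    subset of inducing variables is the source of the computational
    savings over using all M inducing variables."""
    # Pairwise squared distances between batch points and inducing inputs.
    d2 = ((X_batch[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)  # (B, M)
    # Indices of the H nearest inducing inputs for each batch point.
    nearest = np.argpartition(d2, H - 1, axis=1)[:, :H]             # (B, H)
    return np.unique(nearest)

# Toy example: M = 512 inducing inputs, a mini-batch of B = 32 points in 2-D.
rng = np.random.default_rng(0)
Z = rng.standard_normal((512, 2))
X_batch = rng.standard_normal((32, 2))
idx = nearest_inducing_subset(Z, X_batch, H=10)
print(idx.size)  # at most 32 * 10 = 320, typically far fewer than M = 512
```

The per-iteration cost then depends on the size of this union (at most `B * H`) rather than on the total number of inducing variables, which is what allows a large set of inducing points without a prohibitive cost per step.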