Gaussian processes (GPs) are the main surrogate model used in sequential decision-making tasks such as Bayesian optimization and active learning. Their drawbacks are poor scaling with large data sets and the need to run an inner optimization loop when the likelihood is non-Gaussian. In this paper, we focus on `fantasizing' batch acquisition functions, which require the ability to efficiently condition the model on new fantasized data. By using a sparse dual GP parameterization, we gain linear scaling with batch size as well as one-step updates for non-Gaussian likelihoods, thereby extending sparse models to greedy batch fantasizing acquisition functions.
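To make the greedy fantasizing loop concrete, below is a minimal, self-contained Python sketch. It uses a plain exact GP with a rank-one Cholesky extension as a stand-in for the sparse dual GP parameterization described above (which is what actually provides the one-step updates for non-Gaussian likelihoods); the `ExactGP` class, the `condition_on` method, and the toy max-variance acquisition are illustrative assumptions, not the paper's API.

```python
# A minimal sketch of greedy batch fantasizing with a GP surrogate.
import numpy as np
from scipy.linalg import cho_solve, solve_triangular

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel matrix."""
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

class ExactGP:
    """Exact GP regression posterior supporting a cheap one-step update
    when conditioning on a single new (fantasized) observation.
    NOTE: an exact GP, not the paper's sparse dual GP parameterization."""

    def __init__(self, X, y, noise=1e-2):
        self.X, self.y, self.noise = X, y, noise
        K = rbf(X, X) + noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = cho_solve((self.L, True), self.y)

    def posterior(self, Xs):
        """Predictive mean and covariance (of noisy y) at test inputs Xs."""
        Ks = rbf(self.X, Xs)
        mu = Ks.T @ self.alpha
        V = solve_triangular(self.L, Ks, lower=True)
        cov = rbf(Xs, Xs) - V.T @ V + self.noise * np.eye(len(Xs))
        return mu, cov

    def condition_on(self, x_new, y_new):
        """Rank-one Cholesky extension: O(n^2) per fantasized point
        instead of an O(n^3) refit, so the cost of building a batch
        grows linearly with batch size."""
        k_new = rbf(self.X, x_new)                        # (n, 1)
        k_nn = rbf(x_new, x_new)[0, 0] + self.noise
        l = solve_triangular(self.L, k_new, lower=True)   # (n, 1)
        d = np.sqrt(k_nn - (l**2).sum())
        n = len(self.X)
        L_new = np.zeros((n + 1, n + 1))
        L_new[:n, :n] = self.L
        L_new[n, :n] = l.ravel()
        L_new[n, n] = d
        self.L = L_new
        self.X = np.vstack([self.X, x_new])
        self.y = np.vstack([self.y, y_new])
        self.alpha = cho_solve((self.L, True), self.y)

# Greedy batch construction: pick the max-variance candidate, draw a
# fantasy outcome from the current posterior, condition, and repeat.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (20, 1))
y = np.sin(X) + 0.1 * rng.standard_normal((20, 1))
gp = ExactGP(X, y)

candidates = np.linspace(-3, 3, 200)[:, None]
batch = []
for _ in range(5):
    mu, cov = gp.posterior(candidates)
    i = int(np.argmax(np.diag(cov)))                      # toy acquisition
    x_star = candidates[i:i + 1]
    y_fantasy = mu[i:i + 1] + np.sqrt(cov[i, i]) * rng.standard_normal((1, 1))
    gp.condition_on(x_star, y_fantasy)                    # one-step update
    batch.append(float(x_star[0, 0]))
print("fantasized batch:", batch)
```

In the sparse dual GP setting, the analogous update adds one site's dual parameters rather than extending an n-by-n Cholesky factor, which is what removes the inner optimization loop for non-Gaussian likelihoods.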