Sparse variational Gaussian process (SVGP) methods are a common choice for non-conjugate Gaussian process inference because of their computational benefits. In this paper, we improve their computational efficiency by using a dual parameterization where each data example is assigned dual parameters, similar to the site parameters used in expectation propagation. Our dual parameterization speeds up inference using natural gradient descent, and provides a tighter evidence lower bound for hyperparameter learning. The approach has the same memory cost as current SVGP methods, but it is faster and more accurate.
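As a minimal sketch of what assigning dual parameters to each data example means (the notation below follows standard expectation-propagation site conventions and is an assumption for illustration, not the paper's exact formulation), the approximate posterior can be written in a prior-times-sites form:

$$
q(f) \;\propto\; p(f)\,\prod_{n=1}^{N} \exp\!\Big(\lambda_n^{(1)} f_n - \tfrac{1}{2}\,\lambda_n^{(2)} f_n^2\Big),
$$

where $\big(\lambda_n^{(1)}, \lambda_n^{(2)}\big)$ are the dual (site) parameters attached to the $n$-th data example, so the number of stored parameters scales with the data in the same way as the site parameters of expectation propagation.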