Gaussian process training decomposes into inference of the (approximate) posterior and learning of the hyperparameters. For non-Gaussian (non-conjugate) likelihoods, two common choices for approximate inference are Expectation Propagation (EP) and Variational Inference (VI), which have complementary strengths and weaknesses. While VI's lower bound to the marginal likelihood is a suitable objective for inferring the approximate posterior, it does not automatically follow that it is a good learning objective for hyperparameter optimization. We design a hybrid training procedure in which inference leverages conjugate-computation VI and learning uses an EP-like marginal likelihood approximation. We demonstrate empirically on binary classification that this provides a good learning objective and generalizes better.
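To make the hybrid procedure concrete, here is a minimal, self-contained sketch (not the authors' implementation; all names such as `rbf`, `loglik_grads`, `ep_log_marginal`, and `rho` are ours) of one plausible instantiation on a toy 1-D probit GP classifier: the inner loop performs conjugate-computation VI, i.e. natural-gradient (mirror-descent) updates of per-datapoint Gaussian site parameters in the style of Khan & Lin (2017), and the outer step refits the kernel lengthscale against an EP-style log marginal likelihood assembled from the fixed sites (cf. the EP marginal likelihood in Rasmussen & Williams, GPML, Ch. 3).

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar
from numpy.polynomial.hermite_e import hermegauss

rng = np.random.default_rng(0)

def rbf(X, ell):
    """Squared-exponential kernel (unit variance) on 1-D inputs."""
    return np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell ** 2)

# 30-point Gauss-Hermite rule for expectations under N(0, 1)
GH_T, GH_W = hermegauss(30)
GH_W = GH_W / GH_W.sum()

def loglik_grads(y, mu, v):
    """Derivatives of E_{N(f|mu,v)}[log Phi(y f)] w.r.t. mu and v, per point."""
    f = mu[:, None] + np.sqrt(v)[:, None] * GH_T[None, :]
    z = y[:, None] * f
    r = np.exp(norm.logpdf(z) - norm.logcdf(z))     # phi(z) / Phi(z), stable
    d1 = (y[:, None] * r * GH_W).sum(1)             # dE/dmu = E[d log p / df]
    d2 = 0.5 * (-r * (z + r) * GH_W).sum(1)         # dE/dv = 0.5 E[d2 log p / df2]
    return d1, d2

def posterior(K, lam1, lam2):
    """q(f) = N(m, S) from prior K and Gaussian sites exp(lam1*f - 0.5*lam2*f^2)."""
    Kinv = np.linalg.inv(K + 1e-6 * np.eye(len(K)))
    S = np.linalg.inv(Kinv + np.diag(lam2))
    return S @ lam1, S

def ep_log_marginal(ell, X, y, lam1, lam2):
    """EP-style log marginal likelihood with the site parameters held fixed."""
    K = rbf(X, ell)
    mu_t, var_t = lam1 / lam2, 1.0 / lam2           # site means / variances
    m, S = posterior(K, lam1, lam2)
    s2 = np.diag(S)
    # cavity distributions: remove each site from its own posterior marginal
    cav_var = 1.0 / (1.0 / s2 - lam2)
    cav_mu = cav_var * (m / s2 - lam1)
    B = K + np.diag(var_t)
    _, logdet = np.linalg.slogdet(B)
    lz = -0.5 * logdet - 0.5 * mu_t @ np.linalg.solve(B, mu_t)
    lz += norm.logcdf(y * cav_mu / np.sqrt(1.0 + cav_var)).sum()
    lz += 0.5 * np.log(cav_var + var_t).sum()
    lz += (0.5 * (cav_mu - mu_t) ** 2 / (cav_var + var_t)).sum()
    return lz

# toy 1-D binary classification data
N = 60
X = np.sort(rng.uniform(-3, 3, N))
y = np.sign(np.sin(1.5 * X) + 0.3 * rng.standard_normal(N))

lam1, lam2 = np.zeros(N), np.full(N, 1e-2)          # site natural parameters
ell, rho = 1.0, 0.5                                 # lengthscale, CVI step size

for outer in range(8):
    K = rbf(X, ell)
    for _ in range(25):                             # inference: CVI fixed point
        m, S = posterior(K, lam1, lam2)
        d1, d2 = loglik_grads(y, m, np.diag(S))
        # natural-gradient (mirror-descent) update of the site parameters
        lam1 = (1 - rho) * lam1 + rho * (d1 - 2.0 * d2 * m)
        lam2 = (1 - rho) * lam2 + rho * (-2.0 * d2)
    # learning: refit the lengthscale against the EP-style objective
    res = minimize_scalar(lambda l: -ep_log_marginal(l, X, y, lam1, lam2),
                          bounds=(0.2, 5.0), method='bounded')
    ell = res.x
    print(f"outer {outer}: ell = {ell:.3f}, "
          f"logZ_EP = {ep_log_marginal(ell, X, y, lam1, lam2):.3f}")
```

Holding the sites fixed during the outer step is what makes the EP-style energy a function of the kernel alone; a full implementation would differentiate it with respect to all hyperparameters rather than line-searching a single lengthscale, and would use Cholesky-based solves instead of explicit inverses.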