We formulate natural gradient variational inference (VI), expectation propagation (EP), and posterior linearisation (PL) as extensions of Newton's method for optimising the parameters of a Bayesian posterior distribution. This viewpoint explicitly casts inference algorithms under the framework of numerical optimisation. We show that common approximations to Newton's method from the optimisation literature, namely Gauss-Newton and quasi-Newton methods (e.g., the BFGS algorithm), are still valid under this 'Bayes-Newton' framework. This leads to a suite of novel algorithms which are guaranteed to result in positive semi-definite (PSD) covariance matrices, unlike standard VI and EP. Our unifying viewpoint provides new insights into the connections between various inference schemes. All the presented methods apply to any model with a Gaussian prior and non-conjugate likelihood, which we demonstrate with (sparse) Gaussian processes and state space models.
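To give a rough feel for the Newton-style update the abstract refers to, the sketch below runs a damped natural-gradient VI update for a single latent value with a Gaussian prior and a non-log-concave Student-t likelihood, and contrasts it with a variant where the expected Hessian is swapped for an outer-product surrogate that keeps the posterior precision positive. This is a minimal 1-D illustration only, not the authors' implementation: the function name `expected_derivs`, the Student-t settings, and the particular surrogate are assumptions standing in for the Gauss-Newton and quasi-Newton constructions discussed in the paper.

```python
import numpy as np

def expected_derivs(m, s, y, nu=3.0, sig2=0.1, n_quad=50):
    # Gauss-Hermite estimates of E_q[d log p/df], E_q[d^2 log p/df^2] and
    # E_q[(d log p/df)^2] under q(f) = N(m, s), for a Student-t likelihood.
    x, w = np.polynomial.hermite_e.hermegauss(n_quad)
    w = w / np.sqrt(2.0 * np.pi)          # quadrature weights for a standard normal
    f = m + np.sqrt(s) * x                # quadrature nodes mapped through q
    r = y - f
    g = (nu + 1.0) * r / (nu * sig2 + r ** 2)                          # gradient
    h = (nu + 1.0) * (r ** 2 - nu * sig2) / (nu * sig2 + r ** 2) ** 2  # Hessian
    return np.sum(w * g), np.sum(w * h), np.sum(w * g * g)

mu0, s0, y, rho = 0.0, 1.0, 3.0, 0.5      # Gaussian prior N(0, 1), one observation, step size
for gauss_newton in (False, True):
    m, s = mu0, s0                        # initialise q(f) = N(m, s) at the prior
    for _ in range(100):
        Eg, Eh, Egg = expected_derivs(m, s, y)
        if gauss_newton:
            Eh = -Egg                     # illustrative PSD-safe surrogate for the expected Hessian
        # damped natural-gradient (Newton-like) update of the natural parameters
        lam1 = (1 - rho) * m / s + rho * (mu0 / s0 + Eg - Eh * m)
        lam2 = (1 - rho) / s + rho * (1.0 / s0 - Eh)
        m, s = lam1 / lam2, 1.0 / lam2
    print(f"gauss_newton={gauss_newton}: m={m:.3f}, s={s:.3f}")
```

Because the surrogate term is never positive, the second variant cannot produce a negative posterior variance, which mirrors (in one dimension) the PSD guarantee the abstract attributes to the Gauss-Newton and quasi-Newton members of the Bayes-Newton family.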