We develop an optimization algorithm suitable for Bayesian learning in complex models. Our approach relies on natural-gradient updates within a general black-box framework, enabling efficient training with minimal model-specific derivation. It applies to the class of exponential-family variational posterior distributions; we discuss the Gaussian case in detail, where the updates take a particularly simple form. Our Quasi Black-box Variational Inference (QBVI) framework is readily applicable to a wide class of Bayesian inference problems and is simple to implement, since the updates of the variational posterior require neither gradients with respect to the model parameters nor an explicit form of the Fisher information matrix. We develop QBVI under different hypotheses on the posterior covariance matrix, discuss the details of its robust and feasible implementation, and provide a number of real-world applications demonstrating its effectiveness.
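As an illustration of the kind of update involved, the following is a minimal sketch, not the paper's implementation, of a natural-gradient variational update for a one-dimensional Gaussian posterior on a conjugate toy model. It relies on a standard exponential-family identity: the natural gradient of the ELBO with respect to the natural parameters equals its gradient with respect to the mean parameters, which can be estimated as the least-squares slope of the ELBO integrand regressed on the sufficient statistics. The toy data, the step size `step`, and the sample size `n_samples` are illustrative assumptions; note that only evaluations of the log-joint are required, with no model-parameter gradients and no explicit Fisher information matrix.

```python
import numpy as np

# Minimal sketch (hypothetical, not the paper's code): natural-gradient VI for a
# 1-D Gaussian posterior on a conjugate toy model, y_i ~ N(theta, 1), theta ~ N(0, 10).
rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=50)

def log_joint(theta):
    """log p(y, theta) up to an additive constant, for a batch of samples theta."""
    log_lik = -0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)
    log_prior = -0.5 * theta**2 / 10.0
    return log_lik + log_prior

# Natural parameters of q = N(mu, sigma2): lam = (mu/sigma2, -1/(2*sigma2));
# sufficient statistics T(theta) = (theta, theta^2).
lam = np.array([0.0, -0.5])  # initialize at q = N(0, 1)
step, n_samples = 0.1, 200

for _ in range(500):
    sigma2 = -0.5 / lam[1]
    mu = lam[0] * sigma2
    theta = rng.normal(mu, np.sqrt(sigma2), size=n_samples)
    # ELBO integrand f = log p(y, theta) - log q(theta).
    log_q = -0.5 * np.log(2 * np.pi * sigma2) - 0.5 * (theta - mu) ** 2 / sigma2
    f = log_joint(theta) - log_q
    # Natural gradient = Cov(T)^{-1} Cov(T, f): the OLS slope of f on T(theta).
    # No gradient of log_joint and no explicit Fisher matrix appears.
    X = np.column_stack([np.ones(n_samples), theta, theta**2])
    coef, *_ = np.linalg.lstsq(X, f, rcond=None)
    lam = lam + step * coef[1:]  # in general one must keep lam[1] < 0

sigma2 = -0.5 / lam[1]
print("VI posterior: mean %.3f, var %.4f" % (lam[0] * sigma2, sigma2))
# Exact conjugate posterior for comparison: var = 1/(50 + 0.1), mean = var * y.sum().
print("exact:        mean %.3f, var %.4f" % (y.sum() / 50.1, 1 / 50.1))
```

On this conjugate example the ELBO integrand is exactly quadratic in theta, so the regression estimate of the natural gradient is exact up to sampling of the design points and the iterates converge to the known closed-form posterior, which makes the sketch easy to verify.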