Variational Inference (VI) is a popular alternative to asymptotically exact sampling in Bayesian inference. Its main workhorse is optimization over a reverse Kullback-Leibler divergence (RKL), which typically underestimates the tails of the posterior, leading to miscalibration and potential degeneracy. Importance sampling (IS), on the other hand, is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures. The quality of IS crucially depends on the choice of the proposal distribution. Ideally, the proposal distribution has heavier tails than the target, which is rarely achievable by minimizing the RKL. We thus propose a novel combination of optimization and sampling techniques for approximate Bayesian inference, in which the IS proposal distribution is constructed by minimizing a forward KL (FKL) divergence. This approach guarantees asymptotic consistency and fast convergence toward both the optimal IS estimator and the optimal variational approximation. We empirically demonstrate on real data that our method is competitive with variational boosting and MCMC.
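To make the idea concrete, the following is a minimal numerical sketch, not the authors' implementation: it fits a diagonal-Gaussian proposal by stochastically minimizing the forward KL D(p || q), using self-normalized importance weights to approximate the intractable expectation under the posterior, and then reuses the fitted proposal for self-normalized IS. The names `log_target` (a vectorized unnormalized log posterior), `fit_fkl_proposal`, and `snis_expectation` are hypothetical and chosen only for illustration.

```python
# Sketch (assumptions noted above): fit a diagonal-Gaussian IS proposal q_phi by
# approximately minimizing the forward KL D(p || q_phi), then use it for
# self-normalized importance sampling. `log_target(x)` is assumed to accept a
# batch of shape (n, dim) and return unnormalized log-posterior values of shape (n,).
import numpy as np

def fit_fkl_proposal(log_target, dim, n_steps=2000, n_samples=64, lr=5e-2, seed=0):
    rng = np.random.default_rng(seed)
    mu, rho = np.zeros(dim), np.zeros(dim)                       # rho = log sigma
    for _ in range(n_steps):
        sigma = np.exp(rho)
        x = mu + sigma * rng.standard_normal((n_samples, dim))   # x ~ q_phi
        log_q = -0.5 * (((x - mu) / sigma) ** 2 + 2 * rho + np.log(2 * np.pi)).sum(-1)
        log_w = log_target(x) - log_q                            # log importance weights
        w = np.exp(log_w - log_w.max()); w /= w.sum()            # self-normalized weights
        # Ascend the importance-weighted log-likelihood sum_i w_i * log q_phi(x_i);
        # up to a constant (the target entropy) this decreases the forward KL.
        # The weights are treated as fixed within each step, as in standard
        # importance-weighted FKL gradient estimators.
        g_mu = (w[:, None] * (x - mu) / sigma ** 2).sum(0)
        g_rho = (w[:, None] * (((x - mu) / sigma) ** 2 - 1)).sum(0)
        mu, rho = mu + lr * g_mu, rho + lr * g_rho
    return mu, np.exp(rho)

def snis_expectation(log_target, f, mu, sigma, n=10000, seed=1):
    """Self-normalized IS estimate of the posterior mean of f using the fitted proposal."""
    rng = np.random.default_rng(seed)
    x = mu + sigma * rng.standard_normal((n, len(mu)))
    log_q = -0.5 * (((x - mu) / sigma) ** 2 + 2 * np.log(sigma) + np.log(2 * np.pi)).sum(-1)
    log_w = log_target(x) - log_q
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    return w @ f(x)
```

Because the proposal is fitted with the mass-covering FKL rather than the mode-seeking RKL, its tails tend to be at least as heavy as the target's, which is what keeps the importance weights well behaved in the second stage.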