Variational inference is a powerful paradigm for approximate Bayesian inference with a number of appealing properties, including support for model learning and data subsampling. By contrast, MCMC methods like Hamiltonian Monte Carlo do not share these properties but remain attractive since, unlike parametric methods, MCMC is asymptotically unbiased. For these reasons, researchers have sought to combine the strengths of both classes of algorithms, with recent approaches coming closer to realizing this vision in practice. However, supporting data subsampling in these hybrid methods can be a challenge, a shortcoming that we address by introducing a surrogate likelihood that can be learned jointly with other variational parameters. We argue theoretically that the resulting algorithm permits the user to make an intuitive trade-off between inference fidelity and computational cost. In an extensive empirical comparison, we show that our method performs well in practice and that it is well-suited for black-box inference in probabilistic programming frameworks.