We present a variational inference (VI) framework that unifies and leverages sequential Monte Carlo (particle filtering) with \emph{approximate} rejection sampling to construct a flexible family of variational distributions. Furthermore, we augment this approach with a resampling step via Bernoulli race, a generalization of a Bernoulli factory, to obtain a low-variance estimator of the marginal likelihood. Our framework, Variational Rejection Particle Filtering (VRPF), leads to novel variational bounds on the marginal likelihood, which can be optimized efficiently with respect to the variational parameters and generalize several existing approaches in the VI literature. We also present theoretical properties of the variational bound and report experiments on various models of sequential data, such as the Gaussian state-space model and the variational recurrent neural network (VRNN), on which VRPF outperforms several existing state-of-the-art VI methods.