We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing a lower bound on the log marginal likelihood in a broad family of structured probabilistic models. Our approach relies on the efficiency of sequential Monte Carlo (SMC) for performing inference in structured probabilistic models and the flexibility of deep neural networks for modeling complex conditional probability distributions. We develop additional theoretical insights and introduce a new training procedure that improves both model and proposal learning. We demonstrate that our approach provides a fast, easy-to-implement, and scalable means for simultaneous model learning and proposal adaptation in deep generative models.
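To make the core quantity concrete, the following sketch shows how SMC yields an estimate of the log marginal likelihood whose expectation lower-bounds the true value; this is the kind of objective AESMC maximizes. It is an illustrative toy, not the paper's implementation: the linear-Gaussian model, the bootstrap (transition) proposal, and the multinomial resampling scheme are all assumptions chosen for brevity.

```python
import numpy as np

def smc_log_marginal(ys, num_particles=100, seed=0):
    """Bootstrap particle filter estimate of log p(y_{1:T}) for a toy
    linear-Gaussian state-space model (assumed here for illustration):
        x_0 ~ N(0, 1),  x_t ~ N(0.9 * x_{t-1}, 1),  y_t ~ N(x_t, 1).
    The sum over time of log-mean-weights estimates log p(y_{1:T});
    by Jensen's inequality its expectation is a lower bound, which is
    the quantity maximized in AESMC-style training.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=num_particles)  # sample x_0 from the prior
    log_Z = 0.0
    for y in ys:
        # Propose from the transition (bootstrap proposal).
        x = rng.normal(0.9 * x, 1.0)
        # Unnormalized log weights: observation log-likelihood.
        logw = -0.5 * (y - x) ** 2 - 0.5 * np.log(2.0 * np.pi)
        # Accumulate log of the average weight at this step.
        log_Z += np.logaddexp.reduce(logw) - np.log(num_particles)
        # Multinomial resampling.
        w = np.exp(logw - logw.max())
        idx = rng.choice(num_particles, size=num_particles, p=w / w.sum())
        x = x[idx]
    return log_Z
```

In the learned setting, the transition and proposal would be parameterized by neural networks and this estimator differentiated with respect to their parameters; the sketch fixes them to closed-form Gaussians to keep the estimator itself in focus.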