We develop nested variational inference (NVI), a family of methods that learn proposals for nested importance samplers by minimizing a forward or reverse KL divergence at each level of nesting. NVI is applicable to many commonly used importance sampling strategies and provides a mechanism for learning intermediate densities, which can serve as heuristics to guide the sampler. Our experiments apply NVI to (a) sample from a multimodal distribution using a learned annealing path, (b) learn heuristics that approximate the likelihood of future observations in a hidden Markov model, and (c) perform amortized inference in hierarchical deep generative models. We observe that optimizing nested objectives leads to improved sample quality in terms of log average weight and effective sample size.
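To make the setting concrete, the sketch below runs plain annealed importance sampling between a Gaussian base and a bimodal target along a fixed geometric annealing path, and computes the two diagnostics the abstract mentions: log average weight and effective sample size. This is a minimal illustration of the sampler NVI learns proposals and intermediate densities for, not the authors' implementation; the path, kernels, and density choices here are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized log densities: standard-normal base, bimodal target (modes at +-3).
def log_base(x):
    return -0.5 * x**2

def log_target(x):
    return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

# Geometric annealing path: log pi_k(x) = (1 - beta_k) log_base(x) + beta_k log_target(x).
betas = np.linspace(0.0, 1.0, 6)

def log_pi(k, x):
    return (1.0 - betas[k]) * log_base(x) + betas[k] * log_target(x)

K = len(betas) - 1
N = 5000
x = rng.normal(size=N)   # exact samples from the base density pi_0
logw = np.zeros(N)       # accumulated log importance weights

for k in range(1, K + 1):
    # Incremental weight for moving from pi_{k-1} to pi_k.
    logw += log_pi(k, x) - log_pi(k - 1, x)
    # One random-walk Metropolis step targeting pi_k (a simple transition kernel).
    prop = x + rng.normal(scale=1.0, size=N)
    accept = np.log(rng.uniform(size=N)) < log_pi(k, prop) - log_pi(k, x)
    x = np.where(accept, prop, x)

# Diagnostics from the abstract: log average weight and effective sample size.
log_avg_w = np.logaddexp.reduce(logw) - np.log(N)
w = np.exp(logw - logw.max())
ess = w.sum()**2 / (w @ w)
```

A poorly chosen path or proposal shows up directly in these diagnostics (low ESS, high-variance log average weight), which is why learning the intermediate densities, as NVI does, can improve sample quality.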