Sequential Monte Carlo (SMC) is an inference algorithm for state space models that approximates the posterior by sampling from a sequence of target distributions. The target distributions are often chosen to be the filtering distributions, but these ignore information from future observations, leading to practical and theoretical limitations in inference and model learning. We introduce SIXO, a method that instead learns targets that approximate the smoothing distributions, incorporating information from all observations. The key idea is to use density ratio estimation to fit functions that warp the filtering distributions into the smoothing distributions. We then use SMC with these learned targets to define a variational objective for model and proposal learning. SIXO yields provably tighter log marginal lower bounds and offers significantly more accurate posterior inferences and parameter estimates in a variety of domains.
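To make the key idea concrete, the following sketch (our own notation; the latent states $x_{1:t}$, observations $y_{1:T}$, and twist $r_\psi$ are introduced here for illustration and are not fixed by the abstract) shows why, in a state space model, each smoothing target is the corresponding filtering target reweighted by a density ratio:

$$
p(x_{1:t} \mid y_{1:T}) \;\propto\; \underbrace{p(x_{1:t}, y_{1:t})}_{\text{filtering target}}\; \underbrace{p(y_{t+1:T} \mid x_t)}_{\text{twist}},
\qquad
p(y_{t+1:T} \mid x_t) \;=\; \frac{p(x_t, y_{t+1:T})}{p(x_t)\,p(y_{t+1:T})}\; p(y_{t+1:T}).
$$

Because the first factor in the expression for the twist is a ratio of a joint density to a product of marginals, it can be fit by density ratio estimation, for instance by training a classifier to distinguish pairs $(x_t, y_{t+1:T})$ sampled jointly from the model against pairs sampled independently from the marginals. The learned ratio $r_\psi(x_t, y_{t+1:T})$ then warps each filtering distribution toward the corresponding smoothing distribution, up to a constant that does not depend on $x_{1:t}$.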