Amortised inference enables scalable learning of sequential latent-variable models (LVMs) with the evidence lower bound (ELBO). In this setting, variational posteriors are often only partially conditioned: while the true posteriors depend on, for instance, the entire sequence of observations, the approximate posteriors are informed only by past observations. This mimics the Bayesian filter, which is a mixture of smoothing posteriors. Yet we show that the ELBO objective forces partially-conditioned amortised posteriors to approximate products of smoothing posteriors instead, and that the learned generative model is consequently compromised. We demonstrate these theoretical findings in three scenarios: traffic flow, handwritten digits, and aerial vehicle dynamics. With fully-conditioned approximate posteriors, performance improves in terms of both generative modelling and multi-step prediction.
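To make the mixture-versus-product contrast concrete, here is a minimal sketch in standard state-space notation (observations $x_{1:T}$, latents $z_{1:T}$; this notation is our assumption, not fixed by the text above). Marginalising the unseen future shows that the filter is an arithmetic mixture of smoothing posteriors, whereas the expected reverse KL that the ELBO effectively minimises over a partially-conditioned $q$ is solved by a normalised geometric average, i.e. a product of smoothing posteriors:

\[
p(z_t \mid x_{1:t}) = \mathbb{E}_{p(x_{t+1:T} \mid x_{1:t})}\big[\, p(z_t \mid x_{1:T}) \,\big],
\qquad
q^\star(z_t \mid x_{1:t}) \propto \exp\Big( \mathbb{E}_{p(x_{t+1:T} \mid x_{1:t})}\big[ \log p(z_t \mid x_{1:T}) \big] \Big).
\]

The first identity is the tower rule applied to the unseen future; the second is the standard minimiser of $\mathbb{E}_{p(x_{t+1:T} \mid x_{1:t})}\!\left[ \mathrm{KL}\!\left( q \,\|\, p(z_t \mid x_{1:T}) \right) \right]$ over $q$.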
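In implementation terms, the difference in conditioning is a difference in encoder architecture. The following is a minimal PyTorch sketch (the names FilteringEncoder, SmoothingEncoder, x_dim, z_dim, and h_dim are illustrative assumptions, not from the text above): a causal GRU yields a partially-conditioned $q(z_t \mid x_{1:t})$, while a bidirectional GRU yields a fully-conditioned $q(z_t \mid x_{1:T})$.

```python
import torch
import torch.nn as nn


class FilteringEncoder(nn.Module):
    """Partially-conditioned q(z_t | x_{1:t}): a causal (left-to-right) GRU,
    so the posterior at step t sees only past and current observations."""

    def __init__(self, x_dim, z_dim, h_dim=64):
        super().__init__()
        self.rnn = nn.GRU(x_dim, h_dim, batch_first=True)
        self.head = nn.Linear(h_dim, 2 * z_dim)  # Gaussian mean and log-variance

    def forward(self, x):            # x: (batch, T, x_dim)
        h, _ = self.rnn(x)           # h[:, t] is a function of x_{1:t}
        return self.head(h).chunk(2, dim=-1)


class SmoothingEncoder(nn.Module):
    """Fully-conditioned q(z_t | x_{1:T}): a bidirectional GRU gives every
    time step access to the whole observation sequence."""

    def __init__(self, x_dim, z_dim, h_dim=64):
        super().__init__()
        self.rnn = nn.GRU(x_dim, h_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * h_dim, 2 * z_dim)

    def forward(self, x):            # x: (batch, T, x_dim)
        h, _ = self.rnn(x)           # h[:, t] is a function of x_{1:T}
        return self.head(h).chunk(2, dim=-1)


# Usage: per-step Gaussian parameters for a batch of sequences.
x = torch.randn(8, 20, 3)                   # batch=8, T=20, x_dim=3
mean, log_var = SmoothingEncoder(3, 2)(x)   # each of shape (8, 20, 2)
```

Only the encoder's conditioning differs between the two classes; the generative model and the ELBO itself are unchanged.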