We perform approximate inference in state-space models with nonlinear state transitions. Without parameterizing a generative model, we apply Bayesian update formulas using a local linearity approximation parameterized by neural networks. The method is accompanied by a maximum likelihood objective that requires no supervision in the form of uncorrupted observations or ground-truth latent states. Optimization backpropagates through a recursion similar to the classical Kalman filter and smoother. Additionally, using an approximate conditional independence, we can perform smoothing without having to parameterize a separate model. In scientific applications, domain knowledge can supply a linear approximation of the latent transition maps, which is easily incorporated into our model. Use of such domain knowledge is reflected in excellent results (despite our model's simplicity) on the chaotic Lorenz system compared to fully supervised and variational inference methods. Finally, we show competitive results on an audio denoising experiment.
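For reference, the classical recursion alluded to above is the Kalman filter's predict/update step. The following is a minimal self-contained sketch of that recursion under assumed linear-Gaussian dynamics; the constant-velocity matrices are hypothetical stand-ins, not the learned neural-network approximations described in the abstract.

```python
import numpy as np

def kalman_step(mu, P, y, A, C, Q, R):
    """One predict/update step of the classical Kalman filter.

    mu, P : posterior mean and covariance of the previous latent state
    y     : current observation
    A, Q  : linear transition matrix and process-noise covariance
    C, R  : linear emission matrix and observation-noise covariance
    """
    # Predict: propagate the Gaussian belief through the linear dynamics.
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    # Update: condition on the observation y via the Kalman gain.
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    mu_post = mu_pred + K @ (y - C @ mu_pred)
    P_post = (np.eye(len(mu)) - K @ C) @ P_pred
    return mu_post, P_post

# Toy example: 2-D latent state (position, velocity), 1-D noisy observation.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical constant-velocity dynamics
C = np.array([[1.0, 0.0]])               # observe position only
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
mu, P = np.zeros(2), np.eye(2)
for t in range(20):
    y = C @ mu + rng.normal(scale=0.3, size=1)
    mu, P = kalman_step(mu, P, y, A, C, Q, R)
```

In the setting of the abstract, the fixed matrices A and C would instead be produced locally by neural networks, and the filter recursion itself would be differentiated through during maximum-likelihood training.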