We perform approximate inference in state-space models with nonlinear state transitions. Without parameterizing a generative model, we apply Bayesian update formulas using a local linearity approximation parameterized by neural networks. This is accompanied by a maximum likelihood objective that requires no supervision via uncorrupted observations or ground-truth latent states. The optimization backpropagates through a recursion similar to the classical Kalman filter and smoother. Additionally, by exploiting an approximate conditional independence, we can perform smoothing without having to parameterize a separate model. In scientific applications, domain knowledge can supply a linear approximation of the latent transition maps, which we can easily incorporate into our model. Incorporating such domain knowledge yields excellent results (despite our model's simplicity) on the chaotic Lorenz system compared to fully supervised and variational inference methods. Finally, we show competitive results on an audio denoising experiment.
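The recursion referenced above resembles the classical Kalman filter. As a point of reference (not the authors' learned parameterization, which replaces the fixed matrices below with neural-network outputs), here is a minimal sketch of one linear-Gaussian predict/update step; all variable names are illustrative:

```python
import numpy as np

def kalman_step(mu, P, y, A, C, Q, R):
    """One predict/update step of the classical Kalman filter.

    mu, P : posterior mean and covariance of the previous latent state
    y     : current observation
    A, Q  : (locally) linear transition matrix and process noise covariance
    C, R  : observation matrix and observation noise covariance
    """
    # Predict: propagate the previous posterior through the transition.
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    # Update: Bayesian correction with the new observation.
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_pred + K @ (y - C @ mu_pred)
    P_new = (np.eye(len(mu)) - K @ C) @ P_pred
    return mu_new, P_new
```

In the paper's setting, the transition would be only locally linear, so `A` (and the noise terms) would vary with the state; the backpropagation mentioned above runs through a chain of such steps.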