We consider the problem of state estimation in general state-space models using variational inference. For a generic variational family defined using the same backward decomposition as the actual joint smoothing distribution, we establish for the first time that, under mixing assumptions, the variational approximation of expectations of additive state functionals induces an error which grows at most linearly in the number of observations. This guarantee is consistent with the known upper bounds for the approximation of smoothing distributions using standard Monte Carlo methods. Moreover, we propose an amortized inference framework in which a neural network, shared over all time steps, outputs the parameters of the variational kernels. We also study empirically parametrizations which allow analytical marginalization of the variational distributions, and therefore lead to efficient smoothing algorithms. Significant improvements are obtained over state-of-the-art variational solutions, especially when the generative model depends on a strongly nonlinear and noninjective mixing function.
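To fix ideas, here is a minimal sketch of the construction; the notation below is ours and not fixed by the abstract. Writing $\phi_{0:T\mid T}$ for the joint smoothing distribution of the states $x_{0:T}$ given $T+1$ observations, the variational family mirrors its exact backward factorization,
\[
\phi_{0:T\mid T}(x_{0:T}) = \phi_{T\mid T}(x_T)\prod_{t=0}^{T-1}\phi_{t\mid t+1:T}(x_t\mid x_{t+1}),
\qquad
q_{0:T}(x_{0:T}) = q_T(x_T)\prod_{t=0}^{T-1} q_{t\mid t+1}(x_t\mid x_{t+1}),
\]
and, for an additive state functional $h_{0:T}(x_{0:T}) = \sum_{t=0}^{T-1}\tilde h_t(x_t, x_{t+1})$, the stated linear-in-time guarantee has the schematic form
\[
\bigl|\mathbb{E}_{q_{0:T}}\!\left[h_{0:T}(X_{0:T})\right] - \mathbb{E}_{\phi_{0:T\mid T}}\!\left[h_{0:T}(X_{0:T})\right]\bigr| \;\le\; c\,T,
\]
where, under the mixing assumptions, the constant $c$ does not grow with the number of observations, matching the $O(T)$ rate known for standard Monte Carlo smoothers.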