Deep generative models are widely used for modelling high-dimensional time series, such as video animations, audio, and climate data. Sequential variational autoencoders have been successfully applied in many settings, with many variant models relying on discrete-time methods and recurrent neural networks (RNNs). On the other hand, continuous-time methods have recently gained traction, especially in the context of irregularly-sampled time series, where they can handle the data better than discrete-time methods. One such class is Gaussian process variational autoencoders (GPVAEs), where the VAE prior is set to a Gaussian process (GP), allowing inductive biases to be explicitly encoded via the kernel function and providing interpretability of the latent space. However, a major limitation of GPVAEs is that they inherit the same cubic computational cost as GPs. In this work, we leverage the equivalent discrete state-space representation of Markovian GPs to enable a linear-time GP solver via Kalman filtering and smoothing. We show on corrupted-frames and missing-frames tasks that our method performs favourably, especially on the latter, where it outperforms RNN-based models.
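To illustrate the core idea (not the authors' implementation), the sketch below shows how a Markovian GP prior, here a Matérn-3/2 kernel, can be written as a discrete-time linear Gaussian state-space model and evaluated with a Kalman filter in O(N) time, in contrast to the O(N^3) cost of a dense GP solve. The smoothing pass and the VAE components are omitted, and all names (`lengthscale`, `variance`, `noise_var`) are illustrative assumptions.

```python
# Minimal sketch: linear-time inference for a Matern-3/2 GP via its
# state-space representation and a Kalman filter. Assumed hyperparameter
# names are illustrative only.
import numpy as np
from scipy.linalg import expm

def matern32_ssm(lengthscale, variance):
    """Continuous-time state-space form of the Matern-3/2 kernel."""
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])  # drift matrix
    H = np.array([[1.0, 0.0]])                          # observation matrix
    Pinf = np.diag([variance, lam**2 * variance])       # stationary covariance
    return F, H, Pinf

def kalman_filter_loglik(t, y, lengthscale=1.0, variance=1.0, noise_var=0.1):
    """O(N) marginal log-likelihood of a Matern-3/2 GP with Gaussian noise."""
    F, H, Pinf = matern32_ssm(lengthscale, variance)
    m, P = np.zeros((2, 1)), Pinf.copy()
    loglik, t_prev = 0.0, t[0]
    for k in range(len(t)):
        # Predict: discretise the underlying SDE over the (possibly irregular) gap.
        dt = t[k] - t_prev
        A = expm(F * dt)
        Q = Pinf - A @ Pinf @ A.T
        m, P = A @ m, A @ P @ A.T + Q
        t_prev = t[k]
        # Update with the observation at time t[k].
        v = y[k] - (H @ m)[0, 0]            # innovation
        S = (H @ P @ H.T)[0, 0] + noise_var  # innovation variance
        K = P @ H.T / S                      # Kalman gain
        m = m + K * v
        P = P - K @ H @ P
        loglik += -0.5 * (np.log(2 * np.pi * S) + v**2 / S)
    return loglik

# Usage: irregularly-sampled 1-D series; cost grows linearly with len(t).
t = np.sort(np.random.rand(200)) * 10.0
y = np.sin(t) + 0.3 * np.random.randn(len(t))
print(kalman_filter_loglik(t, y))
```

Because the predict step only uses the gap to the previous time point, irregular sampling is handled naturally, which is the property the abstract highlights for continuous-time methods.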