Sequential VAEs have been successfully applied to many high-dimensional time series modelling problems, with many variant models relying on discrete-time mechanisms such as recurrent neural networks (RNNs). On the other hand, continuous-time methods have recently gained traction, especially in the context of irregularly-sampled time series, where they can handle the data better than discrete-time methods. One such class is Gaussian process variational autoencoders (GPVAEs), in which the VAE prior is set to a Gaussian process (GP). However, a major limitation of GPVAEs is that they inherit the cubic computational cost of GPs, making them unattractive to practitioners. In this work, we leverage the equivalent discrete state space representation of Markovian GPs to enable linear-time GPVAE training via Kalman filtering and smoothing. We show on a variety of high-dimensional temporal and spatiotemporal tasks that our method performs favourably compared to existing approaches whilst being computationally highly scalable.
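The following is a minimal illustrative sketch (not the paper's implementation) of the core idea: a Markovian GP prior, here with a Matérn-3/2 kernel, admits an equivalent linear-Gaussian state space model, so its posterior can be computed in linear time with a Kalman filter rather than the cubic cost of naive GP inference. The kernel hyperparameters and the `obs_noise` parameter are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def matern32_ssm(lengthscale, variance):
    """State-space (SDE) form of the Matern-3/2 kernel: feedback matrix F,
    measurement vector H, and stationary state covariance Pinf."""
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])
    H = np.array([[1.0, 0.0]])
    Pinf = np.diag([variance, variance * lam**2])
    return F, H, Pinf

def kalman_filter(times, ys, lengthscale=1.0, variance=1.0, obs_noise=0.1):
    """Linear-time filtering over (possibly irregularly spaced) observations."""
    F, H, Pinf = matern32_ssm(lengthscale, variance)
    m, P = np.zeros((2, 1)), Pinf.copy()  # start from the stationary prior
    t_prev, means = times[0], []
    for t, y in zip(times, ys):
        # Predict: discretise the SDE over the gap dt (handles irregular sampling).
        A = expm(F * (t - t_prev))
        Q = Pinf - A @ Pinf @ A.T
        m, P = A @ m, A @ P @ A.T + Q
        # Update with the observation y under Gaussian noise.
        S = H @ P @ H.T + obs_noise
        K = P @ H.T / S
        m = m + K * (y - H @ m)
        P = P - K @ S @ K.T
        means.append(float(m[0, 0]))
        t_prev = t
    return np.array(means)

# Example on an irregularly sampled 1D signal.
t = np.sort(np.random.uniform(0, 10, 50))
y = np.sin(t) + 0.1 * np.random.randn(50)
print(kalman_filter(t, y)[:5])
```

Each step costs a constant amount of work per observation, so the total cost is O(T) in the number of time points; a backward smoothing pass (not shown) has the same complexity.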