Despite extensive study, the task of conditionally generating sequences of frames, or videos, remains extremely challenging. A key step towards solving it is widely believed to lie in accurately modelling both the spatial and the temporal information in video signals. One promising direction is to learn latent variable models that predict the future in latent space and project back to pixels, as suggested in recent literature. Following this line of work and building on the Neural ODE family of models, we investigate an approach that models time-continuous dynamics over a continuous latent space with a differential equation with respect to time. The intuition behind this approach is that the resulting latent trajectories can then be extrapolated to generate video frames beyond the time steps on which the model was trained. We show that our approach yields promising results on future frame prediction on the Moving MNIST dataset with 1 and 2 digits.
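To make the core idea concrete, the following is a minimal sketch (hypothetical, not the authors' implementation) of a latent ODE dz/dt = f(z): here f is a fixed tanh-linear map standing in for a learned network, the trajectory is integrated with simple Euler steps, and, because the dynamics are time-continuous, the same f extends the trajectory past the training horizon without retraining. In the full model, a decoder would map each latent state back to a video frame.

```python
import numpy as np

def f(z, W):
    """Latent dynamics dz/dt; a fixed tanh-linear map standing in
    for a learned neural network."""
    return np.tanh(W @ z)

def integrate(z0, W, n_steps, dt=0.1):
    """Euler integration of the latent trajectory for n_steps steps.
    Returns an array of shape (n_steps + 1, latent_dim)."""
    traj = [z0]
    z = z0
    for _ in range(n_steps):
        z = z + dt * f(z, W)
        traj.append(z)
    return np.stack(traj)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.5   # stand-in "learned" dynamics
z0 = rng.normal(size=4)             # initial latent state

# Train-time horizon: 10 steps; extrapolation horizon: 30 steps.
train_traj = integrate(z0, W, 10)
full_traj = integrate(z0, W, 30)

# Extrapolation simply continues the same trajectory: the first 11
# states of the longer rollout match the training-range rollout.
assert np.allclose(full_traj[:11], train_traj)
```

The design choice to model dynamics continuously in time (rather than with a fixed-step recurrent update) is what makes this kind of extrapolation natural: the integrator can be run for any horizon, and in principle at any temporal resolution.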