Our goal is to predict future video frames given a sequence of input frames. Despite the abundance of video data, this remains a challenging task because of the high dimensionality of video frames. We address this challenge by proposing the Decompositional Disentangled Predictive Auto-Encoder (DDPAE), a framework that combines structured probabilistic models and deep networks to automatically (i) decompose the high-dimensional video that we aim to predict into components, and (ii) disentangle each component to have low-dimensional temporal dynamics that are easier to predict. Crucially, with an appropriately specified generative model of video frames, our DDPAE is able to learn both the latent decomposition and the disentanglement without explicit supervision. On the Moving MNIST dataset, we show that DDPAE recovers the underlying components (individual digits) and the disentanglement (appearance and location) that we would intuitively choose. We further demonstrate that DDPAE can be applied to the Bouncing Balls dataset, which involves complex interactions between multiple objects, predicting video frames directly from pixels and recovering physical states without explicit supervision.