Our goal in this work is to generate realistic videos given just one initial frame as input. Existing unsupervised approaches to this task do not account for the fact that a video typically depicts a 3D environment that should remain coherent from frame to frame even as the camera and objects move. We address this by developing a model that first estimates the latent 3D structure of the scene, including the segmentation of any moving objects. It then predicts future frames by simulating the object and camera dynamics and rendering the resulting views. Importantly, it is trained end-to-end using only the unsupervised objective of predicting future frames, without any 3D information or segmentation annotations. Experiments on two challenging datasets of natural videos show that our model can estimate 3D structure and motion segmentation from a single frame, and hence generate plausible and varied predictions.
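To make the three-stage pipeline described above concrete, the following is a minimal PyTorch sketch of its structure: estimate latent 3D scene structure from one frame, roll the scene forward with a dynamics module, and render each resulting view, with training driven only by a frame-prediction loss. The module interfaces (`scene_estimator`, `dynamics`, `renderer`) and the simple MSE objective are hypothetical placeholders for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class VideoPredictor(nn.Module):
    """Sketch of the abstract's pipeline: (1) estimate latent 3D structure
    and a moving-object segmentation from a single frame, (2) simulate
    object and camera dynamics, (3) render each predicted view."""

    def __init__(self, scene_estimator, dynamics, renderer):
        super().__init__()
        self.scene_estimator = scene_estimator  # frame -> latent scene state (depth, masks, ...)
        self.dynamics = dynamics                # scene state -> scene state one step later
        self.renderer = renderer                # scene state -> rendered frame

    def forward(self, frame, num_steps):
        state = self.scene_estimator(frame)     # latent 3D structure of the input frame
        frames = []
        for _ in range(num_steps):
            state = self.dynamics(state)        # advance objects and camera
            frames.append(self.renderer(state)) # render the new view
        return torch.stack(frames, dim=1)       # (B, T, C, H, W)


def prediction_loss(model, clip):
    """Unsupervised objective: predict future frames from the first frame.
    No 3D or segmentation labels are used; clip has shape (B, T, C, H, W)."""
    pred = model(clip[:, 0], clip.shape[1] - 1)
    return nn.functional.mse_loss(pred, clip[:, 1:])
```

Because the loss touches only pixels, the 3D structure and segmentation modules receive gradients solely through the rendered frames, which is what allows the whole model to be trained end-to-end without annotations.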