Generating long, temporally consistent video remains an open challenge in video generation. Primarily due to computational limitations, most prior methods limit themselves to training on a small subset of frames that are then extended to longer videos in a sliding-window fashion. Although these techniques may produce sharp videos, they have difficulty retaining long-term temporal consistency due to their limited context length. In this work, we present the Temporally Consistent Video Transformer (TECO), a vector-quantized latent dynamics video prediction model that learns compressed representations to efficiently condition on long videos of hundreds of frames during both training and generation. We use a MaskGit prior for dynamics prediction, which enables sharper and faster generation than prior work. Our experiments show that TECO outperforms SOTA baselines across a variety of video prediction benchmarks, ranging from simple mazes in DMLab and large 3D worlds in Minecraft to complex real-world videos from Kinetics-600. In addition, to better understand the capabilities of video prediction models in modeling temporal consistency, we introduce several challenging video prediction tasks consisting of agents randomly traversing 3D scenes of varying difficulty. This presents a challenging benchmark for video prediction in partially observable environments, where a model must understand which parts of a scene to re-create versus invent depending on its past observations or generations. Generated videos are available at https://wilson1yan.github.io/teco.
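To make the MaskGit-style dynamics prior concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of iterative masked decoding over vector-quantized frame tokens, conditioned on a compressed history of past-frame codes. All names, module sizes, the toy transformer backbone, and the cosine unmasking schedule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of MaskGit-style next-frame code prediction over VQ tokens.
# Everything below (sizes, schedule, architecture) is assumed for illustration.
import math
import torch
import torch.nn as nn

VOCAB, MASK_ID = 512, 512            # codebook size; extra id reserved for [MASK]
TOKENS_PER_FRAME, HIST_LEN, DIM = 64, 16, 256

class ToyMaskGitPrior(nn.Module):
    """Predicts the VQ codes of the next frame from masked codes plus history."""
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB + 1, DIM)   # +1 for the [MASK] token
        self.pos_emb = nn.Parameter(torch.zeros(1, (HIST_LEN + 1) * TOKENS_PER_FRAME, DIM))
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, hist_codes, masked_codes):
        # hist_codes: [B, HIST_LEN * TOKENS_PER_FRAME]; masked_codes: [B, TOKENS_PER_FRAME]
        x = torch.cat([hist_codes, masked_codes], dim=1)
        h = self.backbone(self.tok_emb(x) + self.pos_emb[:, : x.size(1)])
        return self.head(h[:, -TOKENS_PER_FRAME:])    # logits for next-frame codes

@torch.no_grad()
def sample_next_frame(model, hist_codes, steps=8):
    """Iteratively unmask next-frame codes, committing the most confident tokens first."""
    B = hist_codes.size(0)
    codes = torch.full((B, TOKENS_PER_FRAME), MASK_ID, dtype=torch.long)
    for t in range(steps):
        probs = model(hist_codes, codes).softmax(-1)
        sampled = torch.multinomial(probs.reshape(-1, VOCAB), 1).view(B, TOKENS_PER_FRAME)
        conf = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)
        # Already-committed tokens get infinite confidence so they are never re-masked.
        conf = torch.where(codes == MASK_ID, conf, torch.full_like(conf, float("inf")))
        # Cosine schedule: how many tokens should remain masked after this step.
        keep_masked = int(TOKENS_PER_FRAME * math.cos(math.pi / 2 * (t + 1) / steps))
        cutoff = conf.sort(dim=-1).values[:, keep_masked].unsqueeze(-1)
        codes = torch.where((conf >= cutoff) & (codes == MASK_ID), sampled, codes)
    return codes

model = ToyMaskGitPrior()
history = torch.randint(0, VOCAB, (2, HIST_LEN * TOKENS_PER_FRAME))
print(sample_next_frame(model, history).shape)  # torch.Size([2, 64])
```

Because each decoding step fills in many tokens of the next frame in parallel, this style of prior needs far fewer forward passes per frame than autoregressive token-by-token sampling, which is the source of the speed advantage mentioned above.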