Generating a video from its first several static frames is challenging, as it requires anticipating plausible future frames with temporal coherence. Besides video prediction, the abilities to rewind from the last frame and to infill between the head and the tail are also crucial, but they have rarely been explored for video completion. Since just a few hint frames can lead to many different outcomes, a system that follows natural language to perform video completion may significantly improve controllability. Motivated by this, we introduce a novel task, text-guided video completion (TVC), which asks the model to generate a video from partial frames guided by an instruction. We then propose Multimodal Masked Video Generation (MMVG) to address TVC. During training, MMVG discretizes video frames into visual tokens and masks most of them to perform video completion from any time point. At inference time, a single MMVG model addresses all three cases of TVC (video prediction, rewind, and infilling) by applying the corresponding masking conditions. We evaluate MMVG on diverse video scenarios, including egocentric, animation, and gaming videos. Extensive experimental results show that MMVG is effective at generating high-quality visual appearances with text guidance for TVC.
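To make the three masking conditions concrete, the following minimal sketch builds a boolean keep-mask over frame indices for each TVC case. The function name `tvc_mask` and the `head`/`tail` counts are illustrative assumptions, not details from the paper; `True` marks a frame that is given as a condition, `False` one that is masked and must be generated.

```python
def tvc_mask(num_frames, mode, head=2, tail=2):
    """Boolean keep-mask over frame indices for the three TVC cases.

    True  -> frame is provided as the partial-video condition.
    False -> frame is masked out and must be generated by the model.
    `head`/`tail` are illustrative counts of given frames.
    """
    keep = [False] * num_frames
    if mode == "prediction":
        # First frames given; generate the future.
        for i in range(head):
            keep[i] = True
    elif mode == "rewind":
        # Last frames given; generate backward toward the start.
        for i in range(num_frames - tail, num_frames):
            keep[i] = True
    elif mode == "infilling":
        # Head and tail given; fill in the middle.
        for i in range(head):
            keep[i] = True
        for i in range(num_frames - tail, num_frames):
            keep[i] = True
    else:
        raise ValueError(f"unknown TVC mode: {mode}")
    return keep

print(tvc_mask(8, "infilling"))
# -> [True, True, False, False, False, False, True, True]
```

At training time, sampling such masks from random time points lets one model cover all three inference cases.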