Generating a video from its first few static frames is challenging, as it requires anticipating plausible future frames with temporal coherence. Beyond video prediction, the ability to rewind from the last frame or to infill between the head and tail frames is also crucial, but these settings have rarely been explored for video completion. Since a hint of just a few frames can lead to different outcomes, a system that can follow natural language to perform video completion may significantly improve controllability. Inspired by this, we introduce a novel task, text-guided video completion (TVC), which requires the model to generate a video from partial frames guided by an instruction. We then propose Multimodal Masked Video Generation (MMVG) to address this TVC task. During training, MMVG discretizes the video frames into visual tokens and masks most of them to perform video completion from any time point. At inference time, a single MMVG model can address all three cases of TVC, including video prediction, rewind, and infilling, by applying the corresponding masking conditions, as sketched below. We evaluate MMVG in various video scenarios, including egocentric, animation, and gaming videos. Extensive experimental results show that MMVG is effective at generating high-quality visual appearances with text guidance for TVC.
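To make the three masking conditions concrete, here is a minimal sketch of how prediction, rewind, and infilling can be expressed as frame-level masks over a discretized video. The function name `make_tvc_mask`, the `num_given` parameter, and the boolean convention (True = masked, to be generated) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: frame-level masks for the three TVC cases,
# assuming a video of `num_frames` frames already discretized into
# per-frame visual tokens. True = frame is masked and must be generated.
import numpy as np

def make_tvc_mask(num_frames: int, task: str, num_given: int = 1) -> np.ndarray:
    """Return a boolean mask over frames for one TVC case."""
    mask = np.ones(num_frames, dtype=bool)
    if task == "prediction":      # first frames given, later frames generated
        mask[:num_given] = False
    elif task == "rewind":        # last frames given, earlier frames generated
        mask[-num_given:] = False
    elif task == "infilling":     # head and tail given, middle generated
        mask[:num_given] = False
        mask[-num_given:] = False
    else:
        raise ValueError(f"unknown TVC task: {task}")
    return mask

# Example: 8-frame video, infilling with one head and one tail frame kept.
print(make_tvc_mask(8, "infilling"))
# [False  True  True  True  True  True  True False]
```

Under this view, a single model trained to fill in masked frames from arbitrary time points handles all three cases at inference simply by switching the mask pattern.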