In this paper, we propose NUWA-XL, a novel Diffusion over Diffusion architecture for eXtremely Long video generation. Most current work generates long videos segment by segment sequentially, which normally leads to a gap between training on short videos and inferring long videos, and the sequential generation is inefficient. Instead, our approach adopts a ``coarse-to-fine'' process, in which the video can be generated in parallel at the same granularity. A global diffusion model is applied to generate the keyframes across the entire time range, and then local diffusion models recursively fill in the content between nearby frames. This simple yet effective strategy allows us to directly train on long videos (3376 frames) to reduce the training-inference gap, and makes it possible to generate all segments in parallel. To evaluate our model, we build the FlintstonesHD dataset, a new benchmark for long video generation. Experiments show that our model not only generates high-quality long videos with both global and local coherence, but also decreases the average inference time from 7.55min to 26s (by 94.26\%) under the same hardware setting when generating 1024 frames. The homepage link is \url{https://msra-nuwa.azurewebsites.net/}
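The following is a minimal sketch of the coarse-to-fine generation process described above, not the authors' actual implementation: the samplers \texttt{global\_diffusion} and \texttt{local\_diffusion} are hypothetical placeholders. It only illustrates how a global pass produces sparse keyframes over the full time range and how local passes then recursively in-fill between neighbouring frames, with all segments at a given depth sampled in parallel.

\begin{verbatim}
# Hedged sketch of "diffusion over diffusion" coarse-to-fine generation.
# `global_diffusion` and `local_diffusion` are hypothetical placeholders,
# not the paper's real models or API.
from concurrent.futures import ThreadPoolExecutor


def global_diffusion(prompt, num_keyframes):
    """Placeholder: sample `num_keyframes` keyframes for the whole video."""
    return [f"keyframe({prompt}, {i})" for i in range(num_keyframes)]


def local_diffusion(prompt, first_frame, last_frame, num_frames):
    """Placeholder: sample `num_frames` frames between two given frames."""
    mids = [f"frame({first_frame}->{last_frame}, {i})"
            for i in range(num_frames - 2)]
    return [first_frame, *mids, last_frame]


def generate_long_video(prompt, depth, frames_per_segment=16):
    # Coarse level: keyframes spanning the entire time range.
    frames = global_diffusion(prompt, frames_per_segment)
    # Fine levels: recursively fill in between neighbouring frames.
    for _ in range(depth):
        pairs = list(zip(frames[:-1], frames[1:]))
        # Segments are mutually independent, so they can run in parallel.
        with ThreadPoolExecutor() as pool:
            segments = list(pool.map(
                lambda p: local_diffusion(prompt, p[0], p[1],
                                          frames_per_segment),
                pairs))
        # Stitch segments, dropping the shared frame duplicated at each join.
        frames = segments[0] + [f for seg in segments[1:] for f in seg[1:]]
    return frames


if __name__ == "__main__":
    # With 16-frame segments and depth 2, this yields 3376 frames,
    # matching the long-video length used for training in the abstract.
    video = generate_long_video("a long cartoon scene", depth=2)
    print(len(video), "frames")
\end{verbatim}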