Despite the remarkable progress in deep generative models, synthesizing high-resolution and temporally coherent videos remains a challenge due to the high dimensionality of videos and their complex temporal dynamics and large spatial variations. Recent work on diffusion models has shown their potential to address this challenge, yet these models suffer from severe computational and memory inefficiency that limits their scalability. To handle this issue, we propose a novel generative model for videos, coined projected latent video diffusion models (PVDM), a probabilistic diffusion model that learns a video distribution in a low-dimensional latent space and thus can be efficiently trained with high-resolution videos under limited resources. Specifically, PVDM is composed of two components: (a) an autoencoder that projects a given video into 2D-shaped latent vectors, factorizing the complex cubic structure of video pixels, and (b) a diffusion model architecture specialized for this factorized latent space, together with a training/sampling procedure for synthesizing videos of arbitrary length with a single model. Experiments on popular video generation datasets demonstrate the superiority of PVDM over previous video synthesis methods; e.g., PVDM achieves an FVD score of 639.7 on the UCF-101 long-video (128 frames) generation benchmark, markedly improving on the prior state-of-the-art score of 1773.4 (lower is better).
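To make the factorization idea concrete, below is a minimal, hypothetical PyTorch sketch of projecting a video's 3D latent grid into three 2D-shaped latent planes, one per axis. The tensor shapes, the function name `project_to_2d_latents`, and the use of simple mean pooling are illustrative assumptions for exposition only; the paper's autoencoder uses learned projections rather than plain averaging.

```python
import torch

def project_to_2d_latents(u: torch.Tensor):
    """Illustrative sketch (not the paper's exact architecture): factorize a
    3D video latent grid into three 2D-shaped latents by pooling each axis.

    u: latent grid of shape [B, C, T, H, W] produced by a video encoder.
    Returns three image-like latents:
      z_s: [B, C, H, W] -- content shared across time (temporal axis pooled)
      z_h: [B, C, T, W] -- dynamics along width over time (height pooled)
      z_w: [B, C, T, H] -- dynamics along height over time (width pooled)
    """
    z_s = u.mean(dim=2)  # collapse the temporal axis
    z_h = u.mean(dim=3)  # collapse the height axis
    z_w = u.mean(dim=4)  # collapse the width axis
    return z_s, z_h, z_w

# Toy usage: a 16-frame clip whose latent grid has 64 channels and has
# already been downsampled spatially by the encoder (e.g., to 32x32).
u = torch.randn(1, 64, 16, 32, 32)
z_s, z_h, z_w = project_to_2d_latents(u)
print(z_s.shape, z_h.shape, z_w.shape)
# torch.Size([1, 64, 32, 32]) torch.Size([1, 64, 16, 32]) torch.Size([1, 64, 16, 32])
```

Because each latent is 2D and image-like, a standard 2D diffusion backbone can operate on them directly, which is the source of the compute and memory savings the abstract claims over diffusing in the full T×H×W pixel cube.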