Despite the remarkable progress of deep generative models, synthesizing high-resolution, temporally coherent videos remains challenging due to their high dimensionality and complex temporal dynamics, along with large spatial variations. Recent work on diffusion models has shown their potential to address this challenge, yet these models suffer from severe computation and memory inefficiency that limits their scalability. To handle this issue, we propose a novel generative model for videos, coined projected latent video diffusion model (PVDM), a probabilistic diffusion model that learns the video distribution in a low-dimensional latent space and thus can be efficiently trained on high-resolution videos under limited resources. Specifically, PVDM consists of two components: (a) an autoencoder that projects a given video into 2D-shaped latent vectors that factorize the complex cubic structure of video pixels, and (b) a diffusion model architecture specialized for our new factorized latent space, together with a training/sampling procedure for synthesizing videos of arbitrary length with a single model. Experiments on popular video generation datasets demonstrate the superiority of PVDM over previous video synthesis methods; e.g., PVDM obtains an FVD score of 639.7 on the UCF-101 long-video (128 frames) generation benchmark, improving on the prior state-of-the-art score of 1773.4.
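To make the factorization idea concrete, the sketch below illustrates how a cubic video tensor can be collapsed into 2D-shaped latent planes, one per axis pair. This is a minimal toy example, not the paper's actual autoencoder: the class name TriplaneVideoEncoder, the mean-pooling over each axis, and the 1x1 convolution heads are all illustrative assumptions standing in for the learned encoder.

```python
import torch
import torch.nn as nn

class TriplaneVideoEncoder(nn.Module):
    """Illustrative sketch (NOT the paper's architecture): project a video
    tensor of shape (B, C, T, H, W) into three 2D-shaped latent planes by
    pooling over one axis at a time, factorizing the cubic pixel structure."""

    def __init__(self, in_ch: int = 3, latent_ch: int = 4):
        super().__init__()
        # Hypothetical per-plane projection heads; the real model uses a
        # learned video autoencoder rather than pooling + 1x1 convolutions.
        self.to_hw = nn.Conv2d(in_ch, latent_ch, kernel_size=1)  # content-like plane
        self.to_th = nn.Conv2d(in_ch, latent_ch, kernel_size=1)  # motion-like plane
        self.to_tw = nn.Conv2d(in_ch, latent_ch, kernel_size=1)  # motion-like plane

    def forward(self, video: torch.Tensor):
        # video: (B, C, T, H, W)
        z_hw = self.to_hw(video.mean(dim=2))  # (B, c, H, W): pooled over time
        z_th = self.to_th(video.mean(dim=4))  # (B, c, T, H): pooled over width
        z_tw = self.to_tw(video.mean(dim=3))  # (B, c, T, W): pooled over height
        return z_hw, z_th, z_tw

x = torch.randn(2, 3, 16, 64, 64)  # batch of 16-frame 64x64 RGB clips
z_hw, z_th, z_tw = TriplaneVideoEncoder()(x)
print(z_hw.shape, z_th.shape, z_tw.shape)
# torch.Size([2, 4, 64, 64]) torch.Size([2, 4, 16, 64]) torch.Size([2, 4, 16, 64])
```

Under this factorization, a diffusion model can operate on image-like 2D latents instead of the full 3D pixel volume, which is the source of the claimed computation and memory savings.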