Diffusion models have exhibited tremendous progress in image and video generation, exceeding GANs in quality and diversity. However, they are usually trained on very large datasets and are not naturally suited to manipulating a given input image or video. In this paper we show how this can be resolved by training a diffusion model on a single input image or video. Our image/video-specific diffusion model (SinFusion) learns the appearance and dynamics of the single image or video, while utilizing the conditioning capabilities of diffusion models. It can solve a wide array of image/video-specific manipulation tasks. In particular, our model can learn from a few frames the motion and dynamics of a single input video. It can then generate diverse new video samples of the same dynamic scene, extrapolate short videos into long ones (both forward and backward in time), and perform video upsampling. When trained on a single image, our model shows comparable performance and capabilities to previous single-image models in various image manipulation tasks.