Score-based diffusion models have attracted widespread attention and fueled fast progress in recent vision generative tasks. In this paper, we focus on the diffusion model backbone, which has been largely neglected. We systematically explore vision Transformers as diffusion learners for various generative tasks. With our improvements, the performance of a vanilla ViT-based backbone (IU-ViT) is boosted to be on par with traditional U-Net-based methods. We further provide a hypothesis on the implication of disentangling the generative backbone into an encoder-decoder structure, and present proof-of-concept experiments with an ASymmetriC ENcoder-Decoder (ASCEND) verifying the effectiveness of a stronger encoder for generative tasks. Our improvements achieve competitive results on CIFAR-10, CelebA, LSUN, CUB Bird, and large-resolution text-to-image tasks. To the best of our knowledge, we are the first to successfully train a single diffusion model on a text-to-image task beyond 64x64 resolution. We hope this will motivate people to rethink the modeling choices and the training pipelines for diffusion-based generative models.