Diffusion probabilistic models employ a forward Markov diffusion chain to gradually map data to a noise distribution, and learn to generate data by inferring a reverse Markov diffusion chain that inverts the forward process. To achieve competitive generation performance, they require a long diffusion chain, which makes them computationally expensive in both training and generation. To significantly improve computational efficiency, we propose truncating the forward diffusion chain, abolishing the requirement that the data be diffused all the way to random noise. Consequently, we start the reverse diffusion chain from an implicit generative distribution rather than from random noise, and learn its parameters by matching it to the distribution of the data corrupted by the truncated forward diffusion chain. Experimental results show that our truncated diffusion probabilistic models consistently improve over their non-truncated counterparts in both generation performance and the number of required reverse diffusion steps.
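To make the truncation concrete, below is a minimal PyTorch sketch of the idea under standard Gaussian-diffusion assumptions. It is an illustration, not the paper's implementation: the schedule, the names `q_sample` and `T_trunc`, and the stand-in data batch are all assumptions introduced here. It shows a forward chain stopped early at step `T_trunc`, producing the partially corrupted samples whose distribution an implicit generator would be trained to match so the reverse chain can start there instead of at pure noise.

```python
import torch

# Hypothetical sketch of a truncated forward diffusion (illustrative
# assumptions, not the paper's exact implementation). A full chain would
# run T steps until x_T is approximately N(0, I); here we stop at
# T_trunc << T, so x_{T_trunc} retains partial data structure.

T, T_trunc = 1000, 100                      # full vs. truncated chain length (assumed values)
betas = torch.linspace(1e-4, 0.02, T)       # a common linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, 0)  # \bar{alpha}_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    noise = torch.randn_like(x0)
    return alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * noise

# Truncation: corrupt the data only up to step T_trunc. The reverse chain
# is then initialized from an implicit generator (e.g., a GAN-style
# network, assumed here) trained so its samples match the distribution of
# x_{T_trunc}, rather than from Gaussian noise at step T.
x0 = torch.randn(8, 3, 32, 32)              # stand-in batch of image-shaped data
x_trunc = q_sample(x0, T_trunc - 1)         # corrupted data at the truncation point
```

Because the reverse chain only has to denoise from step `T_trunc` back to step 0, generation needs `T_trunc` rather than `T` reverse steps, which is the source of the efficiency gain described above.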