Standard diffusion models involve an image transform -- adding Gaussian noise -- and an image restoration operator that inverts this degradation. We observe that the generative behavior of diffusion models is not strongly dependent on the choice of image degradation, and in fact an entire family of generative models can be constructed by varying this choice. Even when using completely deterministic degradations (e.g., blur, masking, and more), the training and test-time update rules that underlie diffusion models can be easily generalized to create generative models. The success of these fully deterministic models calls into question the community's understanding of diffusion models, which relies on noise in either gradient Langevin dynamics or variational inference, and paves the way for generalized diffusion models that invert arbitrary processes. Our code is available at https://github.com/arpitbansal297/Cold-Diffusion-Models
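To make the "generalized test-time update rule" concrete, the sketch below shows one plausible form of sampling that inverts an arbitrary (possibly fully deterministic) degradation. It assumes a degradation operator D(x, t) with D(x, 0) = x and a learned restoration network R(x, t); the names `degrade`, `restore`, and the loop structure are illustrative placeholders, not the repository's actual API.

```python
import torch

@torch.no_grad()
def generalized_sample(restore, degrade, x_T, T):
    """Sketch of generalized (cold) diffusion sampling.

    restore(x, t): trained network estimating the clean image from a
                   degraded input at severity t (placeholder signature).
    degrade(x0, t): chosen degradation operator (e.g. blur, masking, or
                    Gaussian noise) at severity t, with degrade(x0, 0) = x0.
    x_T: a fully degraded starting sample at severity T.
    """
    x = x_T
    for t in range(T, 0, -1):
        x0_hat = restore(x, t)  # estimate of the clean image
        # Remove the current degradation and re-apply it at the next,
        # milder severity level; this recovers x0_hat exactly at t = 1
        # when restoration is perfect, and degrades gracefully otherwise.
        x = x - degrade(x0_hat, t) + degrade(x0_hat, t - 1)
    return x
```

Because the update only calls the degradation and restoration operators, swapping Gaussian noising for a deterministic transform such as blurring or masking leaves the sampling loop unchanged, which is the sense in which the framework defines a family of generative models.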