In this work, we examine score-based generative models (also called diffusion generative models) from a geometric perspective. From this new viewpoint, we prove that both the forward process of adding noise and the backward process of generating from noise are Wasserstein gradient flows in the space of probability measures. We are the first to prove this connection. Our understanding of score-based (and diffusion) generative models has matured and become more complete by drawing on ideas from different fields such as Bayesian inference, control theory, stochastic differential equations, and the Schrödinger bridge. However, many open questions and challenges remain; one example is how to reduce the sampling time. We demonstrate that the geometric perspective enables us to answer many of these questions and provides new interpretations of some known results. Furthermore, it allows us to devise an intuitive geometric solution to the problem of faster sampling. By augmenting traditional score-based generative models with a projection step, we show that we can generate high-quality images with significantly fewer sampling steps.
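As a minimal, illustrative sketch only (not the paper's algorithm): the idea of adding a projection step to a standard sampler can be pictured as an annealed-Langevin-style reverse-diffusion loop in which each update is followed by a projection. The noise schedule, the analytic toy score, and project_to_data_manifold below are hypothetical stand-ins for a learned score network and whatever constraint set the projection would target.

    # Toy sketch: reverse-diffusion (annealed Langevin) sampler with an optional projection step.
    # Everything here is an assumed placeholder, not the method proposed in this work.
    import numpy as np

    def toy_score(x, sigma):
        # Score of an isotropic Gaussian N(0, (1 + sigma^2) I): a stand-in for a learned score network.
        return -x / (1.0 + sigma**2)

    def project_to_data_manifold(x):
        # Hypothetical projection: clip to a box as a placeholder for the actual constraint set.
        return np.clip(x, -3.0, 3.0)

    def sample(n_steps=50, dim=2, use_projection=True, seed=0):
        rng = np.random.default_rng(seed)
        sigmas = np.geomspace(10.0, 0.01, n_steps)            # assumed geometric noise schedule
        x = sigmas[0] * rng.standard_normal(dim)              # start from the broad prior
        for i in range(n_steps - 1):
            sigma, sigma_next = sigmas[i], sigmas[i + 1]
            step = sigma**2 - sigma_next**2                   # Euler-Maruyama step size for the VE reverse SDE
            x = x + step * toy_score(x, sigma)                # drift along the score
            x = x + np.sqrt(step) * rng.standard_normal(dim)  # diffusion term
            if use_projection:
                x = project_to_data_manifold(x)               # extra projection step (assumed form)
        return x

    print(sample())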