Diffusion models have recently emerged as a powerful framework for generative modeling. They consist of a forward process that perturbs input data with Gaussian white noise and a reverse process that learns a score function to generate samples by denoising. Despite their tremendous success, they are mostly formulated on finite-dimensional spaces, e.g., Euclidean space, limiting their applications to many domains where the data has a functional form, such as in scientific computing and 3D geometric data analysis. In this work, we introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space. In DDOs, the forward process perturbs input functions gradually using a Gaussian process. The generative process is formulated by integrating function-valued Langevin dynamics. Our approach requires an appropriate notion of the score for the perturbed data distribution, which we obtain by generalizing denoising score matching to function spaces that can be infinite-dimensional. We show that the corresponding discretized algorithm generates accurate samples at a fixed cost that is independent of the data resolution. We theoretically and numerically verify the applicability of our approach on a set of problems, including generating solutions to the Navier-Stokes equation viewed as the push-forward distribution of forcings from a Gaussian random field (GRF).
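The central object is denoising score matching lifted to function space. As a hedged sketch (the precise norm and noise-scale weighting are those specified in the paper, not reproduced here), the familiar finite-dimensional objective on the left is replaced by a function-space analogue on the right:

\[
\min_\theta \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\, \mathbb{E}_{\varepsilon \sim \mathcal{N}(0,\sigma^2 I)} \Big\| s_\theta(x+\varepsilon) + \tfrac{\varepsilon}{\sigma^2} \Big\|_2^2
\;\longrightarrow\;
\min_\theta \; \mathbb{E}_{u \sim \mu}\, \mathbb{E}_{\eta \sim \mathcal{N}(0,\mathcal{C})} \big\| \mathcal{G}_\theta(u+\eta) + \eta \big\|_{\ast}^2,
\]

where \(\mu\) denotes the data measure on a separable Hilbert space, \(\mathcal{N}(0,\mathcal{C})\) is a Gaussian measure with trace-class covariance operator \(\mathcal{C}\) (i.e., a Gaussian process), \(\mathcal{G}_\theta\) is an operator-valued model standing in for the score, and \(\|\cdot\|_{\ast}\) is a norm chosen so that the objective remains well-defined in infinite dimensions. The symbols \(\mu\), \(\mathcal{C}\), \(\mathcal{G}_\theta\), and \(\|\cdot\|_{\ast}\) are notational assumptions introduced here for illustration only.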