We present rectified flow, a surprisingly simple approach to learning (neural) ordinary differential equation (ODE) models to transport between two empirically observed distributions \pi_0 and \pi_1, hence providing a unified solution to generative modeling and domain transfer, among various other tasks involving distribution transport. The idea of rectified flow is to learn the ODE to follow the straight paths connecting the points drawn from \pi_0 and \pi_1 as much as possible. This is achieved by solving a straightforward nonlinear least squares optimization problem, which can be easily scaled to large models without introducing extra parameters beyond standard supervised learning. The straight paths are special and preferred because they are the shortest paths between two points, and can be simulated exactly without time discretization, hence yielding computationally efficient models. We show that the procedure of learning a rectified flow from data, called rectification, turns an arbitrary coupling of \pi_0 and \pi_1 into a new deterministic coupling with provably non-increasing convex transport costs. In addition, recursively applying rectification allows us to obtain a sequence of flows with increasingly straight paths, which can be simulated accurately with coarse time discretization in the inference phase. In empirical studies, we show that rectified flow performs superbly on image generation, image-to-image translation, and domain adaptation. In particular, on image generation and translation, our method yields nearly straight flows that give high-quality results even with a single Euler discretization step.
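The least-squares objective described above can be sketched on a toy problem. The following is a minimal illustration, not the paper's implementation: it uses hypothetical 1-D Gaussians as stand-ins for \pi_0 and \pi_1, and a simple linear model v(x, t) = a*x + b*t + c in place of the neural network. Draws (X_0, X_1) from the coupling are interpolated as X_t = t*X_1 + (1-t)*X_0, and the velocity model is regressed onto the straight-line displacement X_1 - X_0; a single Euler step then transports \pi_0 toward \pi_1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D distributions (hypothetical stand-ins for pi_0 and pi_1).
n = 10_000
x0 = rng.normal(0.0, 1.0, n)   # samples from pi_0
x1 = rng.normal(4.0, 1.0, n)   # samples from pi_1, independent coupling

# Straight-line interpolation X_t = t*X_1 + (1-t)*X_0,
# whose time derivative is the constant displacement X_1 - X_0.
t = rng.uniform(0.0, 1.0, n)
xt = t * x1 + (1.0 - t) * x0
target = x1 - x0

# Least-squares fit of a linear velocity model v(x, t) = a*x + b*t + c,
# standing in for the neural network trained in the paper.
features = np.stack([xt, t, np.ones(n)], axis=1)
coef, *_ = np.linalg.lstsq(features, target, rcond=None)

def v(x, t):
    return coef[0] * x + coef[1] * t + coef[2]

# One Euler step over the whole interval [0, 1]: x1_hat = x0 + 1 * v(x0, 0).
x0_new = rng.normal(0.0, 1.0, n)
x1_hat = x0_new + v(x0_new, 0.0)
print(x1_hat.mean(), x1_hat.std())
```

For this pair of shifted Gaussians the learned flow is nearly a pure translation, so even the single Euler step lands the samples close to \pi_1 (mean near 4, standard deviation near 1), illustrating why straight paths tolerate coarse time discretization.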