Recent ODE/SDE-based generative models, such as diffusion models, rectified flows, and flow matching, define a generative process as a time reversal of a fixed forward process. Although these models show impressive performance on large-scale datasets, numerical simulation requires multiple evaluations of a neural network, leading to slow sampling speed. We attribute this slowness to the high curvature of the learned generative trajectories, as curvature is directly related to the truncation error of a numerical solver. Based on the relationship between the forward process and the curvature, here we present an efficient method for training the forward process to minimize the curvature of generative trajectories without any ODE/SDE simulation. Experiments show that our method achieves lower curvature than previous models and, therefore, decreased sampling costs while maintaining competitive performance. Code is available at https://github.com/sangyun884/fast-ode.
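To make the setting concrete, here is a minimal, hypothetical PyTorch sketch of the flow-matching baseline the abstract builds on: a network is regressed onto the constant velocity of a straight-line interpolation between data and noise, and trajectory curvature is then estimated as the deviation of the simulated velocities from the straight chord. All names (`VelocityNet`, `flow_matching_loss`, `estimate_curvature`) are illustrative and not from the authors' repository; note that the paper's actual method additionally trains the forward process (the coupling between data and noise) to reduce curvature, which this sketch, using a fixed independent coupling, does not do.

```python
# Illustrative sketch only, not the authors' implementation.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Tiny MLP predicting the velocity v(x_t, t); a stand-in for a full U-Net."""
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def flow_matching_loss(model, x0, x1):
    """Regress the model onto the constant velocity x1 - x0 of the
    straight-line forward process x_t = (1 - t) * x0 + t * x1."""
    t = torch.rand(x0.shape[0], 1)
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0
    return ((model(xt, t) - target) ** 2).mean()

@torch.no_grad()
def estimate_curvature(model, x1, steps=100):
    """Simulate the learned ODE from t=1 (noise) to t=0 with Euler steps,
    then measure how far the local velocities deviate from the straight
    chord; this deviation is the curvature that drives solver error."""
    x, dt = x1.clone(), 1.0 / steps
    vels = []
    for i in range(steps):
        t = torch.full((x.shape[0], 1), 1.0 - i * dt)
        v = model(x, t)
        vels.append(v)
        x = x - dt * v  # Euler step backward in time
    chord = x1 - x      # straight-line displacement from endpoint to endpoint
    return torch.stack([((v - chord) ** 2).sum(-1) for v in vels]).mean()

# Toy usage on 2-D Gaussians:
model = VelocityNet()
x0, x1 = torch.randn(256, 2), torch.randn(256, 2)  # data batch, noise batch
loss = flow_matching_loss(model, x0, x1)
```

A straight trajectory makes `estimate_curvature` return values near zero, so even a one-step Euler solver incurs little truncation error; this is the quantity the abstract's method seeks to minimize by training the forward process itself.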