Higher-order ODE solvers have become a standard tool for accelerating diffusion probabilistic model (DPM) sampling, motivating the widespread view that first-order methods are inherently slower and that increasing the discretization order is the primary path to faster generation. This paper challenges that view and revisits acceleration from a complementary angle: beyond solver order, the placement of DPM evaluations along the reverse-time dynamics can substantially affect sampling accuracy in the low neural function evaluation (NFE) regime. We propose a training-free, first-order sampler whose leading discretization error has the opposite sign to that of DDIM. Algorithmically, the method approximates the forward-value evaluation via a cheap one-step lookahead predictor. We provide theoretical guarantees showing that the resulting sampler provably approximates the ideal forward-value trajectory while retaining first-order convergence. Empirically, across standard image generation benchmarks (CIFAR-10, ImageNet, FFHQ, and LSUN), the proposed sampler consistently improves sample quality under the same NFE budget and can be competitive with, and sometimes outperform, state-of-the-art higher-order samplers. Overall, the results suggest that the placement of DPM evaluations provides an additional, largely independent design axis for accelerating diffusion sampling.
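To make the "one-step lookahead" idea concrete, the sketch below contrasts a standard deterministic DDIM step with a hypothetical lookahead variant that evaluates the noise network at a predicted next state rather than at the current state, mirroring the explicit-vs-implicit Euler distinction, whose leading local errors carry opposite signs. The interface (`eps_theta`, `alpha_bar`) and the explicit second network call are assumptions made for illustration only; they are not the paper's actual algorithm, whose predictor the abstract describes only as cheap.

```python
import math
import torch

@torch.no_grad()
def ddim_step(eps_theta, x_t, t, s, alpha_bar):
    # Standard deterministic DDIM update from time t to s < t.
    # eps_theta(x, t): noise-prediction network; alpha_bar(t): scalar cumulative
    # signal level (both interface names are assumptions for this sketch).
    a_t, a_s = alpha_bar(t), alpha_bar(s)
    eps = eps_theta(x_t, t)                                   # model evaluated at the CURRENT state
    x0 = (x_t - math.sqrt(1 - a_t) * eps) / math.sqrt(a_t)    # predicted clean sample
    return math.sqrt(a_s) * x0 + math.sqrt(1 - a_s) * eps

@torch.no_grad()
def lookahead_step(eps_theta, x_t, t, s, alpha_bar):
    # Hypothetical lookahead variant (illustration only): run a cheap DDIM
    # predictor to the next time s, then evaluate the network at that predicted
    # state and redo the update with it.  Evaluating at the step endpoint
    # instead of the start is analogous to implicit vs. explicit Euler, whose
    # leading local truncation errors have opposite signs.
    a_t, a_s = alpha_bar(t), alpha_bar(s)
    x_s_pred = ddim_step(eps_theta, x_t, t, s, alpha_bar)     # one-step lookahead predictor
    eps_ahead = eps_theta(x_s_pred, s)                        # model evaluated at the lookahead state
    x0 = (x_t - math.sqrt(1 - a_t) * eps_ahead) / math.sqrt(a_t)
    return math.sqrt(a_s) * x0 + math.sqrt(1 - a_s) * eps_ahead
```

For readability the lookahead step above spends an explicit second network call; how the paper amortizes the forward-value evaluation under a fixed NFE budget (e.g., by reusing earlier model outputs) is not specified in the abstract and is left open here.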