While diffusion models achieve state-of-the-art generation quality, they still suffer from computationally expensive sampling. Recent works address this issue with gradient-based optimization methods that distill a few-step ODE diffusion solver from the full sampling process, reducing the number of function evaluations from dozens to just a few. However, these approaches often rely on intricate training techniques and do not explicitly focus on preserving fine-grained details. In this paper, we introduce the Generalized Solver: a simple parameterization of the ODE sampler that does not require additional training tricks and improves quality over existing approaches. We further combine the original distillation loss with adversarial training, which mitigates artifacts and enhances detail fidelity. We call the resulting method the Generalized Adversarial Solver and demonstrate its superior performance compared to existing solver training methods under similar resource constraints. Code is available at https://github.com/3145tttt/GAS.