Diffusion Language Models (DLMs) have shown strong potential for text generation and are becoming a competitive alternative to autoregressive models. The denoising strategy plays a central role in determining output quality. Mainstream denoising strategies include Standard Diffusion and BlockDiffusion. Standard Diffusion performs global denoising without restricting the update range, often finalizing tokens against incomplete context and causing premature end-of-sequence predictions. BlockDiffusion updates fixed-size blocks in a preset order, but its rigid structure can break apart coherent semantic units and disrupt reasoning. We present WavefrontDiffusion, a dynamic decoding approach that expands a wavefront of active tokens outward from already-finalized positions. This adaptive schedule follows the natural flow of semantic structure while keeping computational cost equal to that of block-based methods. Across four reasoning and code-generation benchmarks, WavefrontDiffusion achieves state-of-the-art performance and produces outputs with higher semantic fidelity, demonstrating the value of adaptive scheduling for more coherent and efficient generation.
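The wavefront scheduling idea can be illustrated with a minimal sketch: starting from one or more finalized seed positions, each step commits one token from the set of unfinalized positions adjacent to the finalized region, so the "wavefront" expands outward. This is only an illustration of the scheduling pattern described above, not the paper's implementation; the function name and the random stand-in confidences are assumptions.

```python
import random

def wavefront_schedule(seq_len, seeds, rng=None):
    """Return the order in which positions get finalized.

    seeds: positions treated as already finalized (e.g. a prompt boundary).
    At each step the active wavefront is the set of unfinalized positions
    adjacent to a finalized one; random scores stand in for the model's
    per-token confidence when choosing which active position to commit.
    """
    rng = rng or random.Random(0)
    finalized = set(seeds)
    order = []
    while len(finalized) < seq_len:
        # Wavefront: unfinalized neighbors of finalized positions.
        front = {p + d for p in finalized for d in (-1, 1)
                 if 0 <= p + d < seq_len and p + d not in finalized}
        # Commit the active position with the highest (stand-in) confidence.
        pick = max(front, key=lambda p: rng.random())
        finalized.add(pick)
        order.append(pick)
    return order

# With a seed in the middle, the front grows in both directions.
print(wavefront_schedule(8, seeds={3}))
```

Unlike a fixed block order, the commit order here adapts per step: whichever side of the front scores higher is finalized first, while every committed token stays adjacent to already-finalized context.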