While diffusion language models (DLMs) enable fine-grained refinement, their practical controllability remains fragile. We identify and formally characterize a central failure mode called update forgetting, in which uniform, context-agnostic updates induce token-level fluctuations across timesteps, erasing earlier semantic edits and disrupting the cumulative refinement process, thereby degrading fluency and coherence. Because this failure originates in uniform, context-agnostic updates, effective control demands explicit token ordering. We propose Token Timestep Allocation (TTA), which realizes soft, semantic token ordering via per-token timestep schedules: critical tokens are frozen early, while uncertain tokens receive continued refinement. This timestep-based ordering can be instantiated either as a fixed policy or as an adaptive policy driven by task signals, thereby supporting a broad spectrum of refinement strategies. Because it operates purely at inference time, it applies uniformly across various DLMs and naturally extends to diverse supervision sources. Empirically, TTA improves both controllability and fluency: on sentiment control, it yields more than 20% higher accuracy and nearly halves perplexity while using fewer than one fifth of the steps; on detoxification, it lowers maximum toxicity (12.2 vs. 14.5) and perplexity (26.0 vs. 32.0). Together, these results demonstrate that softened ordering via timestep allocation is the critical lever for mitigating update forgetting and achieving stable, controllable diffusion text generation.
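The core idea of per-token timestep schedules can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's actual policy: it assumes a per-token confidence score in [0, 1] is available (e.g., from model probabilities or an external classifier), and the hypothetical `allocate_timesteps` maps high confidence to a small refinement budget (frozen early) and low confidence to a budget near the full number of steps.

```python
def allocate_timesteps(confidences, total_steps, min_steps=1):
    """Illustrative per-token timestep allocation (hypothetical, not the
    paper's exact schedule). High-confidence tokens get few refinement
    steps (frozen early); uncertain tokens keep most of the step budget."""
    budgets = []
    for c in confidences:
        # Linearly interpolate between min_steps (confident) and
        # total_steps (uncertain); round to an integer step count.
        budgets.append(round(min_steps + (1.0 - c) * (total_steps - min_steps)))
    return budgets

def refinement_mask(budgets, step):
    """Which tokens are still updated at a given refinement step:
    a token is refined only while its allocated budget lasts."""
    return [step < b for b in budgets]
```

An adaptive variant would recompute the confidences (and hence the budgets) from task signals at each step rather than fixing them up front; a fixed policy would set the schedule once before decoding.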