Post-training methods, especially Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), play an important role in improving the complex reasoning abilities of large language models (LLMs). However, the dominant two-stage pipeline (SFT then RL) suffers from a key inconsistency: SFT enforces rigid imitation that suppresses exploration and induces forgetting, limiting RL's potential for improvement. We address this inconsistency with TRAPO (\textbf{T}rust-\textbf{R}egion \textbf{A}daptive \textbf{P}olicy \textbf{O}ptimization), a hybrid framework that interleaves SFT and RL within each training instance by optimizing an SFT loss on expert prefixes and an RL loss on the model's own completions, unifying external supervision with self-exploration. To stabilize training, we introduce Trust-Region SFT (TrSFT), which minimizes the forward KL divergence inside a trust region but attenuates optimization outside it, effectively shifting toward the reverse KL and yielding stable, mode-seeking updates favorable for RL. An adaptive prefix-selection mechanism further allocates expert guidance according to its measured utility. Experiments on five mathematical reasoning benchmarks show that TRAPO consistently surpasses standard SFT, RL, and SFT-then-RL pipelines, as well as recent state-of-the-art approaches, establishing a strong new paradigm for reasoning-enhanced LLMs.
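As an illustrative sketch only (the precise objective is defined in the paper body; the trust-region threshold $\epsilon$ and the weighting $w_t$ below are assumptions introduced for exposition, with $w_t$ treated as a stop-gradient constant), the per-instance hybrid loss combining a trust-region-weighted SFT term on an expert prefix $y_{\le k}^{*}$ with an RL term on the model's own completion could take the form
\begin{equation*}
\mathcal{L}(\theta) \;=\; \underbrace{-\sum_{t \le k} w_t \log \pi_\theta\!\left(y_t^{*} \mid x, y_{<t}^{*}\right)}_{\text{TrSFT on the expert prefix}} \;+\; \underbrace{\mathcal{L}_{\mathrm{RL}}\!\left(\theta;\, x, y_{\le k}^{*}\right)}_{\text{RL on the model's own completion}},
\qquad
w_t \;=\; \min\!\left(1,\; \frac{\pi_\theta\!\left(y_t^{*} \mid x, y_{<t}^{*}\right)}{\epsilon}\right).
\end{equation*}
Inside the trust region ($\pi_\theta(y_t^{*} \mid x, y_{<t}^{*}) \ge \epsilon$) this reduces to the standard forward-KL (cross-entropy) gradient; outside it, the gradient is attenuated in proportion to the policy's own probability, mimicking the mode-seeking behavior of the reverse KL and avoiding destabilizing updates on expert tokens the current policy finds implausible.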