Self-supervised reinforcement learning (RL) offers a promising route to enhancing the reasoning capabilities of Large Language Models (LLMs) without relying on expensive human-annotated data. However, we find that existing methods suffer from a critical failure mode under long-horizon training: a "policy collapse" in which performance degrades precipitously. We diagnose this instability and show that simply scaling the number of rollouts -- a common strategy for improving performance -- only delays, but does not prevent, the collapse. To counteract it, we first introduce M-GRPO (Momentum-Anchored Group Relative Policy Optimization), a framework that leverages a slowly evolving momentum model to provide a stable training target. We further observe that training is often accompanied by a rapid collapse in policy entropy, yielding a prematurely confident and suboptimal policy. To address this issue, we propose a second contribution: an adaptive filtering method based on the interquartile range (IQR) that dynamically prunes low-entropy trajectories, preserving essential policy diversity. Extensive experiments on multiple reasoning benchmarks demonstrate that M-GRPO stabilizes training while the IQR filter prevents premature convergence; together, these two innovations deliver superior training stability and state-of-the-art performance.
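To make the two mechanisms concrete, below is a minimal PyTorch sketch of (i) an exponential-moving-average update for the momentum anchor and (ii) IQR-based pruning of low-entropy rollouts. The function names, the momentum coefficient beta, and the k = 1.5 Tukey cutoff are illustrative assumptions; the abstract does not specify the paper's exact update rule or threshold.

```python
import torch


def update_momentum_anchor(policy, anchor, beta=0.99):
    # Hypothetical EMA update: the anchor's parameters slowly track the
    # policy's, providing a stable reference target. `beta` is an assumed
    # coefficient, not a value taken from the paper.
    with torch.no_grad():
        for p_pol, p_anc in zip(policy.parameters(), anchor.parameters()):
            p_anc.mul_(beta).add_(p_pol, alpha=1.0 - beta)


def iqr_entropy_filter(trajectories, entropies, k=1.5):
    # Keep only trajectories whose (per-trajectory mean) policy entropy is
    # not a low outlier under Tukey's IQR rule, i.e. drop those below
    # Q1 - k * IQR. The k=1.5 cutoff is an illustrative assumption.
    ent = torch.as_tensor(entropies, dtype=torch.float32)
    q1, q3 = torch.quantile(ent, 0.25), torch.quantile(ent, 0.75)
    lower_bound = q1 - k * (q3 - q1)
    return [traj for traj, e in zip(trajectories, ent) if e >= lower_bound]
```

In this sketch the anchor update would run once per training step and the filter once per batch of rollouts, so the surviving trajectories feed the group-relative advantage computation while the anchor supplies the slowly moving target; how the actual method couples these steps is a design choice not detailed in the abstract.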