Reinforcement Learning (RL) has become a key driver for enhancing the long chain-of-thought (CoT) reasoning capabilities of Large Language Models (LLMs). However, prevalent methods such as GRPO often fail when task difficulty exceeds the model's capacity, leading to reward sparsity and inefficient training. While prior work attempts to mitigate this with off-policy data, such as mixing RL with Supervised Fine-Tuning (SFT) or using hints, these approaches often misguide policy updates. In this work, we identify a core issue underlying these failures, which we term low training affinity. This condition arises from a large distributional mismatch between external guidance and the model's policy. To diagnose it, we introduce Affinity, the first quantitative metric for monitoring exploration efficiency and training stability. To improve Affinity, we propose HINT: Helping Ineffective rollouts Navigate Towards effectiveness, an adaptive hinting framework. Instead of providing direct answers, HINT supplies heuristic hints that guide the model to discover solutions on its own, preserving its autonomous reasoning capabilities. Extensive experiments on mathematical reasoning tasks show that HINT consistently outperforms existing methods, achieving state-of-the-art results with models of various scales, while also demonstrating significantly more stable learning and greater data efficiency. Code is available on GitHub.