Recent advances in large reasoning models (LRMs) such as DeepSeek-R1 and the OpenAI o1 series have achieved notable performance gains on complex reasoning tasks by scaling up the generation length via Chain-of-Thought (CoT). However, a critical issue is their tendency to produce excessively verbose reasoning processes, which leads to inefficiency. Existing work on improving efficiency mainly follows before-reasoning paradigms, such as prompting before reasoning or fine-tuning before reasoning, but overlooks a promising direction: directly encouraging the model to speak concisely by intervening during the generation of the reasoning. To fill this gap, we propose a framework dubbed ConciseHint, which continuously encourages the reasoning model to speak concisely by injecting hints (manually designed or learned on concise data) during the generation of the reasoning. Furthermore, ConciseHint adapts to the complexity of the query by adjusting the hint intensity accordingly, which ensures it does not undermine model performance. Experiments on state-of-the-art LRMs, including the DeepSeek-R1 and Qwen-3 series, demonstrate that our method effectively produces concise reasoning while maintaining performance. Moreover, we show that ConciseHint is flexible and can be seamlessly integrated with existing methods to further push the upper bound of efficiency.
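The core mechanism described above, periodically injecting a concise-speaking hint into the ongoing generation at an intensity adapted to query complexity, can be illustrated with a minimal toy sketch. All function names, the interval-based notion of "intensity", and the length-based complexity heuristic here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
def inject_hints(tokens, hint, interval):
    """Toy simulation of during-generation intervention: append the hint
    tokens into the output stream after every `interval` generated tokens.
    A smaller interval corresponds to a higher hint intensity."""
    out = []
    for i, tok in enumerate(tokens, start=1):
        out.append(tok)
        if i % interval == 0:
            out.extend(hint)
    return out


def adaptive_interval(query_len, base=8, scale=2):
    """Hypothetical adaptivity rule: treat longer queries as more complex
    and widen the injection interval (i.e., lower the hint intensity),
    so that hints do not disrupt reasoning on hard problems."""
    return base + scale * query_len
```

For example, `inject_hints(["a", "b", "c", "d"], ["<be-concise>"], 2)` interleaves the hint after every second token, while `adaptive_interval` makes that interval grow with query length, weakening the intervention for harder queries.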