Large Reasoning Models (LRMs) demonstrate remarkable capabilities on complex reasoning tasks but remain vulnerable to severe safety risks, including harmful content generation and jailbreak attacks. Existing mitigation strategies rely on injecting heuristic safety signals during training, which often suppress reasoning ability and fail to resolve the safety-reasoning trade-off. To investigate this issue systematically, we analyze the reasoning trajectories of diverse LRMs and uncover a phenomenon we term Self-Jailbreak, in which models override their own risk assessments and justify responding to unsafe prompts. This finding reveals that LRMs inherently possess the ability to reject unsafe queries, but that this ability is compromised during reasoning, resulting in harmful outputs. Building on these insights, we propose Chain-of-Guardrail (CoG), a training framework that recomposes or backtracks unsafe reasoning steps, steering the model back onto safe trajectories while preserving valid reasoning chains. Extensive experiments across multiple reasoning and safety benchmarks demonstrate that CoG substantially improves the safety of current LRMs while maintaining comparable reasoning ability, significantly outperforming prior methods that suffer from severe safety-reasoning trade-offs.