Large Reasoning Models (LRMs) extend large language models with explicit, multi-step reasoning traces to enhance transparency and performance on complex tasks. However, these reasoning traces can be redundant or logically inconsistent, becoming a new and hard-to-detect source of hallucination. Existing hallucination detection methods focus primarily on answer-level uncertainty and often fail to detect hallucinations or logical inconsistencies arising from the model's reasoning trace. This oversight is particularly problematic for LRMs, where the explicit reasoning trace not only underpins the model's decision-making process but also serves as a key source of potential hallucination. To this end, we propose RACE (Reasoning and Answer Consistency Evaluation), a novel framework specifically tailored for hallucination detection in LRMs. RACE operates by extracting essential reasoning steps and computing four diagnostic signals: inter-sample consistency of reasoning traces, entropy-based answer uncertainty, semantic alignment between reasoning and answers, and internal coherence of reasoning. Used jointly, these signals make RACE a more robust detector of hallucinations in LRMs. Experiments across multiple datasets and LLMs demonstrate that RACE outperforms existing hallucination detection baselines, offering a robust and generalizable solution for evaluating LRMs. The source code is available at https://github.com/bebr2/RACE.
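To make the entropy-based answer-uncertainty signal concrete, the following is a minimal sketch, not the RACE implementation: it computes the entropy of the empirical answer distribution over several sampled generations for the same question. The other three signals (inter-sample consistency of reasoning traces, reasoning-answer alignment, and internal coherence) would typically require embedding or NLI models and are not shown; the function name and sample data below are purely illustrative assumptions.

```python
import math
from collections import Counter

def answer_entropy(answers):
    """Shannon entropy of the empirical answer distribution.

    `answers` is a list of final answers sampled from the model for the
    same question; semantically equivalent answers are assumed to have
    already been clustered into a shared label. Higher entropy indicates
    greater answer-level uncertainty.
    """
    counts = Counter(answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Hypothetical example: five sampled answers to one question.
samples = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
print(f"answer entropy: {answer_entropy(samples):.3f}")
```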