Deployed language models must decide not only what to answer but also when not to answer. We present UniCR, a unified framework that turns heterogeneous uncertainty evidence, including sequence likelihoods, self-consistency dispersion, retrieval compatibility, and tool or verifier feedback, into a calibrated probability of correctness and then enforces a user-specified error budget through principled refusal. UniCR learns a lightweight calibration head with temperature scaling and proper scoring rules, supports API-only models through black-box features, and provides distribution-free guarantees via conformal risk control. For long-form generation, we align confidence with semantic fidelity by supervising on atomic factuality scores derived from retrieved evidence, reducing confident hallucinations while preserving coverage. Experiments on short-form QA, code generation with execution tests, and retrieval-augmented long-form QA show consistent improvements in calibration metrics, lower area under the risk-coverage curve, and higher coverage at fixed risk than entropy or logit thresholding, post-hoc calibrators, and end-to-end selective baselines. Analyses reveal that evidence contradiction, semantic dispersion, and tool inconsistency are the dominant drivers of abstention, yielding informative user-facing refusal messages. The result is a portable recipe, from evidence fusion to calibrated probability to risk-controlled decision, that improves trustworthiness without fine-tuning the base model and remains valid under distribution shift.
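To make the pipeline concrete, here is a minimal sketch (not the authors' code) of the evidence-fusion-to-calibrated-probability-to-risk-controlled-decision recipe described above. The feature names, the logistic-regression calibration head (used as a stand-in for the temperature-scaled head with proper scoring), and the conformal-style threshold rule are illustrative assumptions rather than the paper's exact method.

```python
# Sketch: fuse black-box uncertainty features into a calibrated correctness
# probability, then pick a refusal threshold with a conformal-style bound so
# the risk on answered queries stays under a user-specified budget.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for the evidence features (assumed names/ordering).
n = 2000
X = np.column_stack([
    rng.normal(size=n),           # mean sequence log-likelihood
    rng.uniform(size=n),          # self-consistency dispersion across samples
    rng.uniform(size=n),          # retrieval compatibility score
    rng.integers(0, 2, size=n),   # tool / verifier pass-fail signal
])
logits = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 1.0 * X[:, 2] + 0.8 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)  # 1 = answer correct

# Split: train the calibration head on one half, calibrate the threshold on the other.
X_tr, y_tr = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

# Lightweight calibration head (logistic regression as a proxy, fit with a proper score).
head = LogisticRegression().fit(X_tr, y_tr)
p_cal = head.predict_proba(X_cal)[:, 1]  # calibrated probability of correctness

def pick_threshold(p, correct, alpha=0.1):
    """Smallest confidence threshold whose corrected selective risk is <= alpha."""
    for tau in np.sort(np.unique(p)):
        accepted = p >= tau
        k = int(accepted.sum())
        if k == 0:
            break
        # Empirical error rate on accepted examples plus a finite-sample correction,
        # in the spirit of conformal risk control (illustrative, not the exact bound).
        risk_bound = ((1 - correct[accepted]).sum() + 1) / (k + 1)
        if risk_bound <= alpha:
            return tau
    return 1.0  # refuse everything if the budget cannot be met

tau = pick_threshold(p_cal, y_cal, alpha=0.1)

# Demo: apply the answer/refuse rule to a few held-out examples.
p_new = head.predict_proba(X_cal[:5])[:, 1]
decisions = ["answer" if p >= tau else "refuse" for p in p_new]
print(f"threshold={tau:.3f}", decisions)
```

Usage-wise, only the threshold selection depends on the error budget: tightening alpha raises tau and trades coverage for lower risk, which is the risk-coverage trade-off the abstract evaluates.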