Deployed language models must decide not only what to answer but also when not to answer. We present UniCR, a unified framework that turns heterogeneous uncertainty evidence, including sequence likelihoods, self-consistency dispersion, retrieval compatibility, and tool or verifier feedback, into a calibrated probability of correctness and then enforces a user-specified error budget via principled refusal. UniCR learns a lightweight calibration head with temperature scaling and proper scoring rules, supports API-only models through black-box features, and offers distribution-free guarantees via conformal risk control. For long-form generation, we align confidence with semantic fidelity by supervising on atomic factuality scores derived from retrieved evidence, reducing confident hallucinations while preserving coverage. Experiments on short-form QA, code generation with execution tests, and retrieval-augmented long-form QA show consistent improvements in calibration metrics, lower area under the risk-coverage curve, and higher coverage at fixed risk compared to entropy or logit thresholds, post-hoc calibrators, and end-to-end selective baselines. Analyses reveal that evidence contradiction, semantic dispersion, and tool inconsistency are the dominant drivers of abstention, yielding informative user-facing refusal messages. The result is a portable recipe, from evidence fusion to calibrated probability to a risk-controlled decision, that improves trustworthiness without fine-tuning the base model and remains valid under distribution shift.
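As a rough illustration of the recipe summarized above, the sketch below fuses hypothetical uncertainty features into a calibrated probability of correctness with a lightweight logistic head, then selects a refusal threshold whose empirical selective risk on a held-out calibration split stays within a user-specified budget, in the spirit of conformal risk control. The feature names, the synthetic data, and the helper `risk_controlled_threshold` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the UniCR code): fuse black-box uncertainty
# features into a calibrated probability of correctness, then pick a refusal
# threshold that keeps the empirical selective risk on a calibration split
# within a user-specified error budget.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical evidence per answer: sequence log-likelihood, self-consistency
# dispersion, retrieval compatibility, and verifier/tool agreement.
n = 2000
X = rng.normal(size=(n, 4))
# Synthetic correctness labels correlated with the evidence, for illustration only.
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2] + 0.6 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Split: fit the calibration head on one half, set the threshold on the other.
X_tr, y_tr = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

# Lightweight calibration head: a logistic model over the fused evidence features.
head = LogisticRegression().fit(X_tr, y_tr)
p_cal = head.predict_proba(X_cal)[:, 1]  # estimated probability of correctness

def risk_controlled_threshold(p, correct, alpha, grid=None):
    """Smallest confidence threshold whose selective risk (error rate among
    answered examples) on the calibration set stays <= alpha, using a simple
    finite-sample correction in the spirit of conformal risk control."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 201)
    for t in grid:
        answered = p >= t
        k = answered.sum()
        if k == 0:
            return t
        errors = (correct[answered] == 0).sum()
        # Conservative adjustment: (errors + 1) / (k + 1) instead of errors / k.
        if (errors + 1) / (k + 1) <= alpha:
            return t
    return 1.0  # refuse everything if no threshold meets the budget

alpha = 0.1  # user-specified error budget on answered queries
t_star = risk_controlled_threshold(p_cal, y_cal, alpha)

def decide(x):
    """Answer if the calibrated correctness probability clears the threshold,
    otherwise abstain (refuse)."""
    p = head.predict_proba(x.reshape(1, -1))[0, 1]
    return ("answer", p) if p >= t_star else ("abstain", p)

print("threshold:", round(float(t_star), 3), decide(X_cal[0]))
```

In this sketch the calibration head plays the role of the evidence-fusion step, and the threshold search stands in for the risk-controlled decision rule; the paper's conformal procedure would replace the ad-hoc correction with a formal distribution-free bound.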