Knowing how reliable a model's response is matters greatly in practical applications. Leveraging the strong generation capabilities of LLMs, recent research has focused on eliciting verbalized confidence. This is further enhanced by combining it with chain-of-thought reasoning, which makes the estimation logical and transparent. However, how reasoning strategies affect the estimated confidence remains under-explored. In this work, we demonstrate that predicting a verbalized probability distribution can effectively encourage in-depth reasoning for confidence estimation. Intuitively, it requires an LLM to consider all candidates in the answer space rather than relying on a single guess, and to carefully assign confidence scores that satisfy the constraints of a probability distribution. The method shows a consistent advantage across different models and a variety of tasks, regardless of whether the answer space is known. Its advantage persists even after reinforcement learning, and further analysis shows that its reasoning patterns align with human expectations.
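To make the idea concrete, below is a minimal sketch of how a verbalized probability distribution over answer candidates might be elicited and parsed; the prompt wording, JSON output format, and helper names are illustrative assumptions, not the paper's actual implementation.

```python
import json

def build_distribution_prompt(question: str, candidates: list[str]) -> str:
    """Hypothetical prompt asking the model to reason over every candidate
    and then verbalize a probability distribution over the answer space."""
    options = "\n".join(f"- {c}" for c in candidates)
    return (
        f"Question: {question}\n"
        f"Candidate answers:\n{options}\n"
        "Think step by step about each candidate, then output a JSON object "
        "mapping every candidate to a probability; the probabilities must sum to 1."
    )

def parse_verbalized_distribution(model_output: str, candidates: list[str]) -> dict[str, float]:
    """Parse the JSON distribution from the model's reply and renormalize it,
    so small verbalization errors (e.g., probabilities summing to 0.98) are corrected."""
    raw = json.loads(model_output)
    scores = {c: max(float(raw.get(c, 0.0)), 0.0) for c in candidates}
    total = sum(scores.values())
    if total <= 0:
        # Degenerate output: fall back to a uniform distribution.
        return {c: 1.0 / len(candidates) for c in candidates}
    return {c: s / total for c, s in scores.items()}

if __name__ == "__main__":
    candidates = ["Paris", "Lyon", "Marseille"]
    # Stand-in for an actual LLM reply to the prompt built above.
    reply = '{"Paris": 0.9, "Lyon": 0.06, "Marseille": 0.02}'
    dist = parse_verbalized_distribution(reply, candidates)
    answer = max(dist, key=dist.get)
    print(f"answer={answer}, confidence={dist[answer]:.2f}")
```

In this sketch, the confidence attached to the final answer is simply the (renormalized) probability the model assigned to it, which forces the model to weigh the remaining candidates rather than report a score for a single guess.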