We show that a GPT-3 model can learn to express uncertainty about its own answers in natural language -- without use of model logits. When given a question, the model generates both an answer and a level of confidence (e.g. "90% confidence" or "high confidence"). These levels map to probabilities that are well calibrated. The model also remains moderately calibrated under distribution shift, and is sensitive to uncertainty in its own answers, rather than imitating human examples. To our knowledge, this is the first time a model has been shown to express calibrated uncertainty about its own answers in natural language. For testing calibration, we introduce the CalibratedMath suite of tasks. We compare the calibration of uncertainty expressed in words ("verbalized probability") to uncertainty extracted from model logits. Both kinds of uncertainty are capable of generalizing calibration under distribution shift. We also provide evidence that GPT-3's ability to generalize calibration depends on pre-trained latent representations that correlate with epistemic uncertainty over its answers.
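To make the notion of calibration concrete, the following is a minimal Python sketch of a generic binned calibration-error metric applied to verbalized confidences. The function name, sample data, and binning scheme are illustrative assumptions, not the paper's exact evaluation setup on CalibratedMath.

```python
import numpy as np

def binned_calibration_error(confidences, correct, n_bins=10):
    """Average |mean confidence - accuracy| over equal-width bins, weighted by
    bin size. A generic ECE-style metric for illustration only."""
    conf = np.asarray(confidences, dtype=float)
    acc = np.asarray(correct, dtype=float)
    # Assign each prediction to a bin; clip so confidence == 1.0 lands in the top bin.
    bin_idx = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            err += mask.mean() * abs(conf[mask].mean() - acc[mask].mean())
    return err

# Hypothetical data: verbalized confidences (e.g. "90% confidence" parsed to 0.9)
# paired with whether the corresponding answer was correct (1) or not (0).
confidences = [0.9, 0.9, 0.6, 0.3, 0.7, 0.5]
correct = [1, 1, 1, 0, 1, 0]
print(f"calibration error: {binned_calibration_error(confidences, correct):.3f}")
```

The same metric could in principle be applied to confidences derived from model logits, allowing a side-by-side comparison with verbalized probability of the kind the abstract describes.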