Recent works have shown that language models (LMs) capture different types of knowledge regarding facts or common sense. However, because no model is perfect, they still fail to provide appropriate answers in many cases. In this paper, we ask the question "how can we know when language models know, with confidence, the answer to a particular query?" We examine this question from the point of view of calibration, the property that a probabilistic model's predicted probabilities are well correlated with the probability of correctness. We first examine a state-of-the-art generative QA model, T5, and investigate whether its probabilities are well calibrated, finding that the answer is a relatively emphatic no. We then examine methods to calibrate such models so that their confidence scores correlate better with the likelihood of correctness, through fine-tuning, post-hoc probability modification, or adjustment of the predicted outputs or inputs. Experiments on a diverse range of datasets demonstrate the effectiveness of our methods. We also perform analysis to study the strengths and limitations of these methods, shedding light on further improvements that may be made in methods for calibrating LMs.
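As a concrete illustration (not part of the paper's method), calibration is commonly quantified with expected calibration error (ECE): predictions are binned by confidence, and each bin's average confidence is compared with its empirical accuracy. The sketch below is a minimal example assuming NumPy and hypothetical confidence/correctness data.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare each bin's average
    confidence to its empirical accuracy (standard ECE)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()   # mean predicted probability in bin
        accuracy = correct[mask].mean()       # fraction actually correct in bin
        ece += mask.mean() * abs(avg_conf - accuracy)
    return ece

# Hypothetical example: model answer probabilities vs. whether each answer was correct.
probs = [0.95, 0.80, 0.60, 0.90, 0.30]
hits  = [1,    1,    0,    0,    0]
print(expected_calibration_error(probs, hits))
```

A perfectly calibrated model would yield an ECE near zero; a large gap between confidence and accuracy (as the abstract reports for T5) yields a high value.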