We study calibration in question answering: estimating whether the model correctly predicts the answer to each question. Unlike prior work, which mainly relies on the model's confidence score, our calibrator incorporates information about the input example (e.g., the question and the evidence context). Together with data augmentation via back translation, our simple approach achieves 5-10% gains in calibration accuracy on reading comprehension benchmarks. Furthermore, we present the first calibration study in the open retrieval setting, comparing the calibration accuracy of retrieval-based span prediction models and answer generation models. Here again, our approach shows consistent gains over calibrators that rely on the model confidence alone. Our simple and efficient calibrator can be easily adapted to many tasks and model architectures, showing robust gains in all settings.
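To make the setup concrete, the following is a minimal, hypothetical sketch of such a feature-based calibrator: a binary classifier that predicts whether the QA model answered a given example correctly, using the model's confidence together with simple features of the question and evidence context. The choice of a gradient-boosted classifier, the specific features, and the toy data are illustrative assumptions, not the exact design described above.

```python
# Sketch (assumptions): calibrate a QA model with a binary classifier over
# the model's confidence plus simple input-example features.
from sklearn.ensemble import GradientBoostingClassifier

def build_features(confidence: float, question: str, context: str) -> list:
    # Confidence score plus input-side features (here: token counts,
    # an illustrative stand-in for richer question/context information).
    return [confidence, len(question.split()), len(context.split())]

# Toy calibration set: (confidence, question, context, was_model_correct).
calibration_data = [
    (0.92, "Who wrote Hamlet?",
     "Hamlet is a tragedy written by William Shakespeare.", 1),
    (0.35, "When was the play first staged?",
     "The exact date of the first performance is unknown.", 0),
    (0.81, "Where is the Globe Theatre?",
     "The Globe Theatre stands on the bank of the Thames in London.", 1),
    (0.40, "Who directed the 1996 film?",
     "Several film adaptations of the play exist.", 0),
]

X = [build_features(c, q, ctx) for c, q, ctx, _ in calibration_data]
y = [label for *_, label in calibration_data]

calibrator = GradientBoostingClassifier().fit(X, y)

# Predicted probability that the QA model's answer to a new example is correct.
new_features = build_features(0.77, "Who wrote Macbeth?",
                              "Macbeth is another tragedy by Shakespeare.")
print(calibrator.predict_proba([new_features])[0, 1])
```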