Model calibration aims to adjust (calibrate) a model's confidence so that it matches the model's expected accuracy. We argue that the conventional evaluation of calibration, expected calibration error (ECE), does not reflect how useful the model's confidence actually is. For example, after standard temperature scaling, confidence scores become similar for all predictions, which makes it hard for users to distinguish correct predictions from wrong ones, even though the scaled model achieves low ECE. Building on these observations, we propose a new calibration metric, MacroCE, that better captures whether the model assigns low confidence to wrong predictions and high confidence to correct ones. We examine several conventional calibration methods, including temperature scaling, feature-based classifiers, neural answer reranking, and label smoothing, and find that none of them brings significant gains under MacroCE. Toward more effective calibration, we propose a new method based on the model's prediction consistency along the training trajectory. This method, which we name consistency calibration, shows promise for better calibration.
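To make the contrast between ECE and a macro-averaged alternative concrete, the sketch below compares standard binned ECE with a hypothetical macro-averaged error that penalizes low confidence on correct predictions and high confidence on wrong ones. The exact MacroCE formula is not given in this abstract, so the `macro_calibration_error` definition and all names here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (assumptions, not the paper's code): contrasts binned ECE
# with a macro-averaged calibration error in the spirit of MacroCE.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE: weighted average of |accuracy - confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

def macro_calibration_error(confidences, correct):
    """Hypothetical macro-averaged error: average (1 - confidence) over correct
    predictions and confidence over wrong predictions, then take the mean."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    err_correct = (1.0 - confidences[correct]).mean() if correct.any() else 0.0
    err_wrong = confidences[~correct].mean() if (~correct).any() else 0.0
    return 0.5 * (err_correct + err_wrong)

if __name__ == "__main__":
    # After aggressive temperature scaling, confidences can cluster near the
    # overall accuracy: ECE looks good, yet correct and wrong predictions are
    # indistinguishable, which the macro-averaged error exposes.
    rng = np.random.default_rng(0)
    correct = rng.random(1000) < 0.7      # ~70% of predictions are correct
    flat_conf = np.full(1000, 0.7)        # uniform confidence of 0.7
    print("ECE  :", expected_calibration_error(flat_conf, correct))  # near 0
    print("Macro:", macro_calibration_error(flat_conf, correct))     # ~0.5
```

With uniform confidence equal to the accuracy, ECE is close to zero while the macro-averaged error stays high, illustrating why a low ECE alone does not guarantee useful confidence scores.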
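The abstract describes consistency calibration only at a high level (confidence derived from prediction consistency along the training trajectory). The following sketch shows one plausible way to turn checkpoint agreement into a confidence score; the checkpoint-collection scheme and the simple agreement ratio are assumptions for illustration, not the authors' exact method.

```python
# Illustrative sketch (assumptions, not the authors' implementation): derive a
# confidence score from how consistently predictions along the training
# trajectory agree with the final model's prediction.
from typing import Hashable, List, Sequence

def consistency_confidence(checkpoint_predictions: Sequence[Hashable],
                           final_prediction: Hashable) -> float:
    """Fraction of saved checkpoints whose prediction matches the final one;
    high agreement along the trajectory is read as high confidence."""
    if not checkpoint_predictions:
        return 0.0
    agree = sum(p == final_prediction for p in checkpoint_predictions)
    return agree / len(checkpoint_predictions)

# Example: a question answered identically by 8 of 10 saved checkpoints.
preds: List[str] = ["Paris"] * 8 + ["Lyon", "Paris?"]
print(consistency_confidence(preds, final_prediction="Paris"))  # 0.8
```

Because the score is driven by agreement across checkpoints rather than by softmax magnitudes, it tends to separate confidently correct predictions from unstable, likely wrong ones, which is the behavior MacroCE is designed to reward.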