Deep neural network (DNN) classifiers are often overconfident, producing miscalibrated class probabilities. Most existing calibration methods either lack theoretical guarantees for producing calibrated outputs or reduce classification accuracy in the process. This paper proposes a new kernel-based calibration method called KCal. Unlike other calibration procedures, KCal does not operate directly on the logits or softmax outputs of the DNN. Instead, it uses the penultimate-layer latent embedding to learn a metric space in a supervised manner. In effect, KCal amounts to a supervised dimensionality reduction of the neural network embedding, and it generates a prediction using kernel density estimation on a holdout calibration set. We first analyze KCal theoretically, showing that it enjoys a provable asymptotic calibration guarantee. Then, through extensive experiments, we confirm that KCal consistently outperforms existing calibration methods in terms of both classification accuracy and (confidence and class-wise) calibration error.
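The core prediction step described above — kernel density estimation over a holdout calibration set — can be sketched as a Nadaraya-Watson-style classifier. The snippet below is a minimal illustrative sketch, not the paper's actual KCal implementation: it assumes the learned supervised projection has already been applied to both the query embedding and the calibration embeddings, and uses a plain Gaussian kernel with a hand-picked bandwidth.

```python
import numpy as np

def kde_class_probs(z, X_cal, y_cal, n_classes, bandwidth=1.0):
    """Estimate class probabilities for a query embedding z via KDE.

    Illustrative sketch (not the paper's exact method): a Gaussian kernel
    weights each holdout calibration point by its distance to z, and the
    class probability is the normalized kernel mass of that class.
    """
    # Squared Euclidean distances from the query to every calibration point
    d2 = np.sum((X_cal - z) ** 2, axis=1)
    # Gaussian kernel weights
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Accumulate kernel mass per class, then normalize to probabilities
    probs = np.zeros(n_classes)
    for c in range(n_classes):
        probs[c] = w[y_cal == c].sum()
    return probs / probs.sum()

# Toy calibration set: two well-separated classes in a 2-D embedding space
X_cal = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y_cal = np.array([0, 0, 1, 1])
p = kde_class_probs(np.array([0.05, 0.0]), X_cal, y_cal, n_classes=2)
```

Because the query sits next to the class-0 points, nearly all kernel mass falls on class 0, so the predicted probabilities are well calibrated in the asymptotic sense the abstract refers to (as the calibration set grows).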