Modern neural networks are highly miscalibrated. This poses a significant challenge for safety-critical systems that aim to utilise deep neural networks (DNNs) reliably. Many recently proposed approaches have demonstrated substantial progress in improving DNN calibration. However, they hardly touch upon refinement, which historically has been an essential aspect of calibration. Refinement indicates the separability of a network's correct and incorrect predictions. This paper presents a theoretically and empirically supported exposition for reviewing a model's calibration and refinement. Firstly, we show the breakdown of the expected calibration error (ECE) into predicted confidence and refinement. Connecting with this result, we highlight that regularisation-based calibration focuses only on naively reducing a model's confidence. This, logically, has a severe downside for a model's refinement. We support our claims through rigorous empirical evaluations of many state-of-the-art calibration approaches on standard datasets. We find that many calibration approaches, such as label smoothing and mixup, lower the utility of a DNN by degrading its refinement. Even under natural data shift, this calibration-refinement trade-off holds for the majority of calibration methods. These findings call for an urgent retrospective into some popular pathways taken for modern DNN calibration.
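To make the two quantities the abstract contrasts concrete, here is a minimal sketch of how calibration and refinement are commonly measured: a standard binned ECE, and refinement quantified as the AUROC of the model's confidence for separating correct from incorrect predictions. This is an illustrative assumption of standard metric definitions, not the paper's exact decomposition; the function names and the choice of 15 bins are our own.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: the weighted average absolute gap between
    per-bin accuracy and per-bin mean confidence."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()      # per-bin accuracy
            conf = confidences[in_bin].mean() # per-bin mean confidence
            ece += in_bin.mean() * abs(acc - conf)
    return ece

def refinement_auroc(confidences, correct):
    """Refinement as separability: AUROC when confidence is used
    as a score to distinguish correct from incorrect predictions."""
    return roc_auc_score(correct, confidences)

# Usage with hypothetical softmax outputs `probs` and labels `labels`:
#   preds = probs.argmax(axis=1)
#   confidences = probs.max(axis=1)
#   correct = (preds == labels).astype(int)
#   expected_calibration_error(confidences, correct)  # lower is better
#   refinement_auroc(confidences, correct)            # higher is better
```

Under these definitions, uniformly lowering confidences can shrink ECE without changing the ranking of predictions, while a method that compresses confidences of correct and incorrect predictions together degrades the AUROC, which is the trade-off the abstract highlights.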