The prediction accuracy of machine learning methods is steadily increasing, but the calibration of their uncertainty predictions poses a significant challenge. Numerous works focus on obtaining well-calibrated predictive models, but less is known about reliably assessing model calibration. This limits our ability to know when algorithms for improving calibration have a real effect, and when their improvements are merely artifacts due to random noise in finite datasets. In this work, we consider detecting mis-calibration of predictive models using a finite validation dataset as a hypothesis testing problem. The null hypothesis is that the predictive model is calibrated, while the alternative hypothesis is that the deviation from calibration is sufficiently large. We find that detecting mis-calibration is only possible when the conditional probabilities of the classes are sufficiently smooth functions of the predictions. When the conditional class probabilities are H\"older continuous, we propose T-Cal, a minimax optimal test for calibration based on a debiased plug-in estimator of the $\ell_2$-Expected Calibration Error (ECE). We further propose Adaptive T-Cal, a version that is adaptive to unknown smoothness. We verify our theoretical findings with a broad range of experiments, including with several popular deep neural net architectures and several standard post-hoc calibration methods. T-Cal is a practical general-purpose tool, which -- combined with classical tests for discrete-valued predictors -- can be used to test the calibration of virtually any probabilistic classification method.
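To make the testing formulation concrete, the following is a hedged sketch of the quantities the abstract refers to; the exact norm, weighting, and separation rate used for T-Cal are specified in the body of the paper and may differ in detail from what is shown here. For a probabilistic predictor $f$, covariates $X$, and one-hot encoded label $Y$, the $\ell_2$-ECE and the two hypotheses can be written as
\[
\mathrm{ECE}_2(f) \;=\; \Big( \mathbb{E}\big[ \big\| \mathbb{E}[Y \mid f(X)] - f(X) \big\|_2^2 \big] \Big)^{1/2},
\qquad
H_0:\ \mathrm{ECE}_2(f) = 0
\quad \text{vs.} \quad
H_1:\ \mathrm{ECE}_2(f) \ge \varepsilon_n,
\]
where $\varepsilon_n$ is a separation threshold that shrinks with the size $n$ of the validation dataset; T-Cal rejects $H_0$ when a debiased plug-in estimate of $\mathrm{ECE}_2(f)^2$ exceeds a suitable critical value.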