Calibrating deep learning models to yield uncertainty-aware predictions is crucial as deep neural networks are increasingly deployed in safety-critical applications. While existing post-hoc calibration methods achieve impressive results on in-domain test datasets, they are limited by their inability to yield reliable uncertainty estimates under domain shift and in out-of-domain (OOD) scenarios. We aim to bridge this gap by proposing DAC, an accuracy-preserving, density-aware calibration method based on k-nearest neighbors (KNN). In contrast to existing post-hoc methods, we use the hidden layers of a classifier as a source of uncertainty-related information and study their importance. We show that DAC is a generic method that can readily be combined with state-of-the-art post-hoc calibration methods. DAC boosts the robustness of calibration under domain shift and OOD, while maintaining excellent in-domain predictive uncertainty estimates. We demonstrate that DAC yields consistently better calibration across a large number of model architectures, datasets, and metrics. Additionally, we show that DAC substantially improves the calibration of recent large-scale neural networks pre-trained on vast amounts of data.
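To make the core idea concrete, the following is a minimal sketch of density-aware calibration in the spirit described above: KNN distances in a classifier's feature space serve as a density proxy, and lower-density (likely shifted or OOD) inputs receive a larger per-sample temperature, flattening their predicted probabilities without changing the argmax. All function names and the linear temperature form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knn_distance(test_feats, train_feats, k=5):
    # Mean Euclidean distance from each test point to its k nearest
    # training points in feature space -- a proxy for local density
    # (large distance = low density = likely domain shift / OOD).
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

def density_aware_softmax(logits, dists, w=1.0, b=1.0):
    # Per-sample temperature grows with KNN distance, so low-density
    # inputs get flatter, less confident predictions. Scaling logits
    # by a positive temperature leaves the argmax unchanged, so the
    # classifier's accuracy is preserved. The linear form b + w * d
    # is an assumption for illustration.
    temp = b + w * dists
    z = logits / temp[:, None]
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)
```

For example, with training features clustered at the origin, a test point near the cluster keeps a confident prediction while a distant point with identical logits is assigned a noticeably flatter distribution, illustrating the robustness-without-accuracy-loss property claimed in the abstract.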