Deep Learning methods are known to suffer from calibration issues: they typically produce over-confident estimates. These problems are exacerbated in the low-data regime. Although the calibration of probabilistic models is well studied, calibrating extremely over-parametrized models in the low-data regime presents unique challenges. We show that deep ensembles do not necessarily lead to improved calibration properties. In fact, we show that standard ensembling methods, when used in conjunction with modern techniques such as mixup regularization, can lead to less calibrated models. This text examines the interplay between three of the simplest and most commonly used approaches to leveraging deep learning when data is scarce: data augmentation, ensembling, and post-processing calibration methods. Although standard ensembling techniques certainly help boost accuracy, we demonstrate that the calibration of deep ensembles relies on subtle trade-offs. We also find that calibration methods such as temperature scaling need to be slightly tweaked when used with deep ensembles and, crucially, need to be executed after the averaging process. Our simulations indicate that, in the low-data regime, this simple strategy can halve the Expected Calibration Error (ECE) on a range of benchmark classification problems compared to standard deep ensembles.
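The key recommendation above, fitting the temperature after pooling the ensemble's predictions rather than per member, can be illustrated with a short sketch. This is a minimal, illustrative implementation and not the authors' code: the helper names (pool_then_calibrate, expected_calibration_error), the NumPy/SciPy formulation, and the assumption of a held-out validation set are all choices made here for exposition.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def pool_then_calibrate(member_probs, labels):
    """Fit a single temperature on the *averaged* ensemble probabilities.

    member_probs: array of shape (n_members, n_samples, n_classes),
        each member's predicted class probabilities on a held-out set.
    labels: int array of shape (n_samples,), true class indices.

    Pooling first and calibrating afterwards follows the ordering the
    abstract advocates: temperature scaling runs after the averaging step.
    """
    pooled = member_probs.mean(axis=0)                # ensemble average
    log_pooled = np.log(np.clip(pooled, 1e-12, 1.0))  # treat as "logits"

    def nll(temperature):
        # Rescale the pooled log-probabilities and re-normalize via softmax.
        scaled = log_pooled / temperature
        scaled -= scaled.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(scaled)
        probs /= probs.sum(axis=1, keepdims=True)
        # Negative log-likelihood of the true labels on the held-out set.
        return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

    # One-dimensional search for the temperature minimizing validation NLL.
    result = minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded")
    return result.x


def expected_calibration_error(probs, labels, n_bins=15):
    """Standard ECE: bin predictions by confidence and compare, per bin,
    the mean accuracy against the mean confidence."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean()
                                     - confidences[mask].mean())
    return ece
```

The alternative ordering (calibrating each member separately, then averaging) would instead fit one temperature per member before pooling; the sketch above implements only the pool-then-calibrate variant that the text recommends.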