Predictive uncertainties can be characterized by two properties--calibration and sharpness. This paper argues for reasoning about uncertainty in terms of these properties and proposes simple algorithms for enforcing them in deep learning. Our methods focus on the strongest notion of calibration--distribution calibration--and enforce it by fitting a low-dimensional density or quantile function with a neural estimator. The resulting approach is much simpler and more broadly applicable than previous methods across both classification and regression. Empirically, we find that our methods improve predictive uncertainties on several tasks with minimal computational and implementation overhead. Our insights suggest simple and improved ways of training deep learning models whose accurate uncertainties can be leveraged to improve performance across downstream applications.
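As a concrete illustration of the recalibration idea (a minimal sketch, not the paper's exact algorithm), quantile recalibration for regression can be carried out by computing probability-integral-transform (PIT) values on a held-out calibration set and fitting a monotone map that makes them uniform. The abstract mentions a neural estimator for this low-dimensional quantile function; the sketch below substitutes an empirical CDF to stay dependency-free, and the synthetic data and all names are illustrative.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

# Synthetic calibration set: the model predicts N(mu, 1) but is overconfident
# (the true noise std is 2.0 while the model claims 1.0).
n = 5000
mu = rng.normal(size=n)
y = mu + rng.normal(scale=2.0, size=n)

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + np.vectorize(erf)(x / np.sqrt(2.0)))

# PIT values p_i = F_i(y_i); for a distribution-calibrated model these
# are Uniform(0, 1).
p = normal_cdf((y - mu) / 1.0)

# Recalibration map R: the empirical CDF of the PIT values, so that
# R(F_i(y)) is approximately uniform. A monotone neural network could be
# fit here instead.
p_sorted = np.sort(p)
def recalibrate(q):
    return np.searchsorted(p_sorted, q, side="right") / len(p_sorted)

# Coverage of the nominal 90% central interval, before and after
# recalibration: the raw model is too narrow; the recalibrated one is
# close to nominal.
q_lo, q_hi = 0.05, 0.95
raw_cov = np.mean((p >= q_lo) & (p <= q_hi))
cal_cov = np.mean((recalibrate(p) >= q_lo) & (recalibrate(p) <= q_hi))
print(round(raw_cov, 2), round(cal_cov, 2))
```

Because the model understates its noise, the raw 90% interval covers only about 60% of held-out targets, while the recalibrated interval recovers roughly the nominal 90% coverage; sharpness is then addressed separately, e.g. by how concentrated the recalibrated predictive distributions remain.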