While Uncertainty Quantification (UQ) is crucial for achieving trustworthy Machine Learning (ML), most UQ methods suffer from disparate and inconsistent evaluation protocols. We argue that this inconsistency stems from the unclear requirements the community expects UQ to meet. This opinion paper offers a new perspective by specifying those requirements through five downstream tasks in which we expect uncertainty scores to have substantial predictive power. We design these downstream tasks carefully to reflect real-life usage of ML models. On an example benchmark of seven classification datasets, we observed no statistically significant superiority of state-of-the-art intrinsic UQ methods over simple baselines. We believe these findings question the very rationale for quantifying uncertainty and call for a standardized UQ evaluation protocol based on metrics proven to be relevant to the ML practitioner.
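To make the notion of "downstream predictive power" concrete, here is a minimal, hypothetical sketch of one such evaluation: checking whether an uncertainty score detects the model's own misclassifications better than a trivial baseline. The misclassification-detection task, the AUROC metric, and all variable names are illustrative assumptions, not the paper's exact protocol or benchmark.

```python
# Hypothetical sketch: does an uncertainty score predict which test points the
# classifier gets wrong better than a simple baseline (1 - max softmax prob)?
# Toy data stands in for a real model and test set.
import numpy as np
from sklearn.metrics import roc_auc_score


def misclassification_auroc(uncertainty, y_true, y_pred):
    """AUROC of an uncertainty score for detecting misclassified samples.

    Higher uncertainty should flag the points where y_pred != y_true.
    """
    errors = (y_pred != y_true).astype(int)  # 1 = misclassified
    return roc_auc_score(errors, uncertainty)


rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=1000)        # softmax outputs of some classifier
y_pred = probs.argmax(axis=1)
# Simulate labels: the prediction is correct ~80% of the time.
y_true = np.where(rng.random(1000) < 0.8, y_pred, rng.integers(0, 10, 1000))

baseline_unc = 1.0 - probs.max(axis=1)               # simple baseline uncertainty
fancy_unc = baseline_unc + 0.05 * rng.normal(size=1000)  # stand-in for an intrinsic UQ score

print("baseline AUROC: ", misclassification_auroc(baseline_unc, y_true, y_pred))
print("UQ method AUROC:", misclassification_auroc(fancy_unc, y_true, y_pred))
```

The paper's point is that comparisons of this kind, repeated across several practitioner-relevant tasks and datasets with proper statistical testing, should be the standard by which UQ methods are judged.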