Reliable application of machine learning-based decision systems in the wild is one of the major challenges currently investigated by the field. A large portion of established approaches aims to detect erroneous predictions by means of assigning confidence scores. This confidence may be obtained by either quantifying the model's predictive uncertainty, learning explicit scoring functions, or assessing whether the input is in line with the training distribution. Curiously, while these approaches all claim to address the same eventual goal of detecting failures of a classifier upon real-life application, they currently constitute largely separate research fields with individual evaluation protocols, which either exclude a substantial part of relevant methods or ignore large parts of relevant failure sources. In this work, we systematically reveal current pitfalls caused by these inconsistencies and derive requirements for a holistic and realistic evaluation of failure detection. To demonstrate the relevance of this unified perspective, we present a large-scale empirical study that, for the first time, enables benchmarking of confidence scoring functions with respect to all relevant methods and failure sources. The revelation of a simple softmax response baseline as the overall best performing method underlines the drastic shortcomings of current evaluation amid the abundance of published research on confidence scoring. Code and trained models are available at https://github.com/IML-DKFZ/fd-shifts.
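To make the "softmax response" baseline mentioned above concrete, the following is a minimal, illustrative Python sketch (not the paper's actual evaluation protocol or the fd-shifts implementation): confidence is taken as the maximum softmax probability, and failure detection is scored as the AUROC for separating correct from incorrect predictions. The function names (`msr_confidence`, `failure_detection_auroc`) and the random data are hypothetical and for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(z)
    return exp / exp.sum(axis=-1, keepdims=True)


def msr_confidence(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax response: confidence = highest predicted class probability."""
    return softmax(logits).max(axis=-1)


def failure_detection_auroc(logits: np.ndarray, labels: np.ndarray) -> float:
    """AUROC for distinguishing correct (1) from failed (0) predictions via MSR confidence."""
    preds = logits.argmax(axis=-1)
    correct = (preds == labels).astype(int)
    return roc_auc_score(correct, msr_confidence(logits))


# Illustrative usage with random logits and labels.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
labels = rng.integers(0, 10, size=1000)
print(f"Failure-detection AUROC: {failure_detection_auroc(logits, labels):.3f}")
```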