As neural network classifiers are deployed in real-world applications, it is crucial that their failures can be detected reliably. One practical solution is to assign a confidence score to each prediction, then use these scores to filter out possible misclassifications. However, existing confidence metrics are not yet sufficiently reliable for this role. This paper presents a new framework that produces a more reliable quantitative metric for detecting misclassification errors. This framework, RED, builds an error detector on top of the base classifier and estimates the uncertainty of the detection scores using Gaussian Processes. Empirical comparisons with other error detection methods on 125 UCI datasets demonstrate that this approach is effective. Additional implementations on two probabilistic base classifiers and a large deep learning architecture solving a vision task further confirm the robustness of the method. A case study involving out-of-distribution and adversarial samples shows the potential of the proposed method to improve the trustworthiness of neural network classifiers more broadly in the future.
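To make the idea concrete, below is a minimal sketch of a residual-based error detector in this spirit, built with scikit-learn: a base neural network classifier is trained, the residual between prediction correctness and the base confidence (maximum softmax probability) is computed on a held-out calibration split, and a Gaussian Process regressor is fit to predict that residual together with its uncertainty. The dataset, kernel, and network sizes are illustrative assumptions for this sketch, not the paper's exact RED implementation.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Train a base neural network classifier on one split of the data.
X, y = load_digits(return_X_y=True)
X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, train_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
base = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
base.fit(X_fit, y_fit)

# On a held-out calibration split, the detection target is the residual between
# correctness (1 if the prediction is right, else 0) and the base confidence
# (maximum softmax probability).
conf_cal = base.predict_proba(X_cal).max(axis=1)
correct_cal = (base.predict(X_cal) == y_cal).astype(float)
residual_cal = correct_cal - conf_cal

# Fit a Gaussian Process on the inputs to predict that residual
# (illustrative kernel choice: RBF plus white noise).
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), random_state=0)
gp.fit(X_cal, residual_cal)

# Detection score = base confidence corrected by the predicted residual;
# the GP's predictive std quantifies the uncertainty of the detection score itself.
conf_test = base.predict_proba(X_test).max(axis=1)
resid_mean, resid_std = gp.predict(X_test, return_std=True)
detection_score = conf_test + resid_mean
print(detection_score[:5], resid_std[:5])
```

Thresholding `detection_score`, possibly adjusted by `resid_std`, then gives a filter for likely misclassifications; the GP's uncertainty estimate is what distinguishes this setup from using the raw softmax confidence alone.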