Deep neural networks achieve impressive performance, yet they cannot reliably estimate their predictive confidence, limiting their applicability in high-risk domains. We show that applying a multi-label one-vs-all loss reveals classification ambiguity and reduces model overconfidence. The introduced SLOVA (Single Label One-Vs-All) model redefines typical one-vs-all predictive probabilities for the single-label setting, where exactly one class is the correct answer. The proposed classifier is confident only if a single class has a high probability and the other probabilities are negligible. Unlike the typical softmax function, SLOVA naturally detects out-of-distribution samples when the probabilities of all classes are small. The model is additionally fine-tuned with exponential calibration, which allows us to precisely align the confidence score with model accuracy. We verify our approach on three tasks. First, we demonstrate that SLOVA is competitive with the state of the art on in-distribution calibration. Second, the performance of SLOVA remains robust under dataset shift. Finally, our approach performs extremely well in the detection of out-of-distribution samples. Consequently, SLOVA is a tool that can be used in various applications where uncertainty modeling is required.
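The one-vs-all confidence described above can be sketched numerically. The sketch below assumes a natural reading of the single-label one-vs-all score: each class gets an independent sigmoid probability, and the score for class k is the probability that only class k is active. The function name `slova_scores` and the exact formula are illustrative assumptions, not the paper's verified definition.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def slova_scores(logits):
    """Illustrative single-label one-vs-all scores (assumed form).

    Each class has an independent sigmoid probability p_k; the score
    for class k is p_k * prod_{j != k} (1 - p_j), i.e. the probability
    that exactly class k, and no other class, is the correct label.
    """
    p = sigmoid(np.asarray(logits, dtype=float))
    return np.array([
        p[k] * np.prod(np.delete(1.0 - p, k))
        for k in range(len(p))
    ])

# One dominant class: the winning score is close to 1 (confident).
print(slova_scores([8.0, -6.0, -7.0]))

# Two competing classes: every score is small, exposing ambiguity
# that a softmax would hide behind a ~0.5 probability.
print(slova_scores([5.0, 5.0, -6.0]))

# All logits low (out-of-distribution-like input): all scores are
# small, so low maximum score flags the sample as suspicious.
print(slova_scores([-5.0, -6.0, -5.0]))
```

Unlike softmax, these scores need not sum to one, so a uniformly low score vector is a meaningful signal rather than a forced renormalization.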