Out-of-distribution (OOD) detection is critical for preventing deep learning models from making incorrect predictions, and thus for ensuring the safety of artificial intelligence systems. In safety-critical applications such as medical diagnosis and autonomous driving in particular, the cost of incorrect decisions is usually unbearable. However, neural networks often suffer from overconfidence: they assign high confidence to OOD data that are never seen during training and may be irrelevant to the training data, namely the in-distribution (ID) data. Determining the reliability of a prediction therefore remains a difficult and challenging task. In this work, we propose Uncertainty-Estimation with Normalized Logits (UE-NL), a robust learning method for OOD detection with three main benefits. (1) Neural networks with UE-NL treat every ID sample equally by predicting an uncertainty score for the input data; this uncertainty is incorporated into the softmax function to adjust the learning strength of easy and hard samples during the training phase, making the model learn robustly and accurately. (2) UE-NL enforces a constant vector norm on the logits, decoupling the effect of the growing output norm from the optimization process, which is partly responsible for the overconfidence issue. (3) UE-NL provides a new metric, the magnitude of the uncertainty score, for detecting OOD data. Experiments demonstrate that UE-NL achieves top performance on common OOD benchmarks and is more robust to noisy ID data that may be misjudged as OOD by other methods.
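To make the mechanism concrete, the following is a minimal sketch (not the authors' implementation) of how constant-norm logits combined with a predicted uncertainty score might enter the softmax. The temperature-style division by the uncertainty and the names `ue_nl_probs` and `norm_const` are illustrative assumptions; a higher uncertainty softens the output distribution, weakening the learning signal for that sample, while the fixed logit norm prevents confidence from growing simply because the output magnitude grows.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ue_nl_probs(logits, uncertainty, norm_const=10.0):
    """Illustrative sketch: softmax over constant-norm logits
    scaled by a predicted per-sample uncertainty score."""
    # Enforce a constant L2 norm on the logits so that a growing
    # output magnitude cannot inflate confidence by itself.
    normalized = norm_const * logits / np.linalg.norm(logits)
    # Temperature-style scaling (an assumption for illustration):
    # larger uncertainty -> flatter distribution -> weaker learning
    # strength for this sample during training.
    return softmax(normalized / uncertainty)

logits = np.array([2.0, 1.0, 0.5])
p_confident = ue_nl_probs(logits, uncertainty=0.5)  # low uncertainty
p_uncertain = ue_nl_probs(logits, uncertainty=2.0)  # high uncertainty
```

At test time, the magnitude of the predicted uncertainty itself can then serve as the OOD score: inputs assigned a large uncertainty are flagged as out-of-distribution.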