Traditional deep neural networks (NNs) have achieved state-of-the-art performance on classification tasks across various application domains. However, NNs do not consider the inherent uncertainty in data associated with class probabilities, and misclassification under uncertainty can introduce high risk in real-world decision making (e.g., misclassifying objects on roads can lead to serious accidents). Unlike Bayesian NNs, which infer uncertainty indirectly through weight uncertainties, evidential NNs (ENNs) have recently been proposed to model the uncertainty of class probabilities explicitly and use it for classification tasks. An ENN formulates the predictions of an NN as subjective opinions and learns, via a deterministic NN, to collect from data the evidence that forms those opinions. However, an ENN is trained as a black box, without explicitly considering the different root causes of inherent uncertainty in data, such as vacuity (i.e., uncertainty due to a lack of evidence) or dissonance (i.e., uncertainty due to conflicting evidence). By considering this multidimensional uncertainty, we proposed a novel uncertainty-aware evidential NN called WGAN-ENN (WENN) for solving an out-of-distribution (OOD) detection problem. We took a hybrid approach that combines a Wasserstein Generative Adversarial Network (WGAN) with an ENN to jointly train a model with prior knowledge of a certain class, which yields high vacuity for OOD samples. Through extensive empirical experiments on both synthetic and real-world datasets, we demonstrated that the uncertainty estimated by WENN significantly helps distinguish OOD samples from boundary samples. WENN outperformed competitive counterparts in OOD detection.
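To make the two uncertainty types concrete, the following minimal sketch computes vacuity and dissonance from per-class evidence using the standard subjective-logic formulas commonly used with ENNs (with Dirichlet parameters alpha = evidence + 1); it is an illustration under those standard definitions, not the paper's exact implementation.

```python
import numpy as np

def vacuity_and_dissonance(evidence):
    """Compute vacuity and dissonance of a subjective opinion formed
    from per-class evidence (standard subjective-logic formulas;
    a sketch, not the authors' exact implementation)."""
    e = np.asarray(evidence, dtype=float)
    K = len(e)                      # number of classes
    S = e.sum() + K                 # Dirichlet strength (alpha = e + 1)
    b = e / S                       # belief mass per class
    vacuity = K / S                 # uncertainty from lack of evidence

    # Balance of two belief masses: 1 when equal, 0 when one dominates.
    def bal(bj, bk):
        return 1.0 - abs(bj - bk) / (bj + bk) if bj + bk > 0 else 0.0

    # Dissonance: uncertainty from strong but conflicting evidence.
    diss = 0.0
    for k in range(K):
        others = [j for j in range(K) if j != k and b[j] > 0]
        denom = sum(b[j] for j in others)
        if denom > 0:
            diss += b[k] * sum(b[j] * bal(b[j], b[k]) for j in others) / denom
    return vacuity, diss

# No evidence at all -> maximal vacuity (OOD-like sample).
print(vacuity_and_dissonance([0, 0, 0]))
# Strong but conflicting evidence -> low vacuity, high dissonance
# (a boundary-like sample).
print(vacuity_and_dissonance([50, 50, 0]))
```

The contrast between the two calls is the point of the paper's distinction: an OOD sample has high vacuity (little total evidence), while a boundary sample has low vacuity but high dissonance (plenty of evidence, but conflicting).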