Deep Neural Networks (DNNs) have performed admirably in classification tasks. However, the characterization of their classification uncertainties, required for certain applications, has been lacking. In this work, we investigate this issue by assessing DNNs' ability to estimate conditional probabilities and by proposing a framework for systematic uncertainty characterization. Denoting the input sample as x and the category as y, the classification task of assigning a category y to a given input x can be reduced to the task of estimating the conditional probabilities p(y|x), as approximated by the DNN at its last layer using the softmax function. Since softmax yields a vector whose elements all fall in the interval (0, 1) and sum to 1, it suggests a probabilistic interpretation of the DNN's output. Using synthetic and real-world datasets, we examine the impact of various factors, e.g., the probability density f(x) and inter-categorical sparsity, on the accuracy of DNNs' estimates of p(y|x), and find that the likelihood probability density and the inter-categorical sparsity have a greater impact on DNNs' classification uncertainty than the prior probability does.
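The probabilistic reading of the softmax output described above can be illustrated with a minimal sketch; the logit values below are hypothetical, standing in for a DNN's last-layer activations:

```python
import numpy as np

def softmax(z):
    """Map raw logits z to a probability vector over categories."""
    e = np.exp(z - np.max(z))  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical last-layer logits for a 3-class problem.
logits = np.array([2.0, 1.0, 0.1])
p = softmax(logits)  # each p[i] is read as an estimate of p(y=i | x)
```

Every element of `p` lies in (0, 1) and the elements sum to 1, which is what licenses interpreting them as estimates of the conditional probabilities p(y|x); whether those estimates are accurate is precisely the question the work investigates.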