There is a significant need for principled uncertainty reasoning in machine learning systems as they are increasingly deployed in safety-critical domains. A recent approach to uncertainty-aware regression with neural networks (NNs), based on learning evidential distributions that capture aleatoric and epistemic uncertainty, shows promise over traditional deterministic methods and typical Bayesian NNs, notably through its ability to disentangle the two kinds of uncertainty. Despite the empirical successes of Deep Evidential Regression (DER), important gaps in its mathematical foundation raise the question of why the technique seemingly works. We detail these theoretical shortcomings and analyze performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification method. We go on to propose corrections and redefinitions of how aleatoric and epistemic uncertainties should be extracted from NNs.