There is a significant need for principled uncertainty reasoning in machine learning systems as they are increasingly deployed in safety-critical domains. A new approach to uncertainty-aware neural networks (NNs), based on learning evidential distributions for aleatoric and epistemic uncertainty, shows promise over traditional deterministic methods and typical Bayesian NNs, yet several important gaps remain in the theory and implementation of these networks. We discuss three issues with a proposed solution that extracts aleatoric and epistemic uncertainties from regression-based neural networks. The approach derives its technique by placing evidential priors over the original Gaussian likelihood function and training the NN to infer the hyperparameters of the evidential distribution. Doing so allows the simultaneous extraction of both uncertainties, without sampling or the use of out-of-distribution data, for univariate regression tasks. We describe the outstanding issues in detail, provide a possible solution, and generalize the deep evidential regression technique to multivariate cases.
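To make the evidential setup concrete, the sketch below illustrates the univariate case under the Normal-Inverse-Gamma (NIG) parameterization commonly used in deep evidential regression: the network outputs the four hyperparameters (γ, ν, α, β) of an NIG prior over the Gaussian likelihood's mean and variance, from which the prediction E[μ] = γ, the aleatoric uncertainty E[σ²] = β/(α − 1), and the epistemic uncertainty Var[μ] = β/(ν(α − 1)) follow in closed form. The `EvidentialHead` module, its softplus constraints, and the feature dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Illustrative output head mapping features to the four
    Normal-Inverse-Gamma hyperparameters (gamma, nu, alpha, beta)."""

    def __init__(self, in_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, 4)

    def forward(self, x: torch.Tensor):
        gamma, nu_raw, alpha_raw, beta_raw = self.linear(x).chunk(4, dim=-1)
        nu = F.softplus(nu_raw)              # nu > 0
        alpha = F.softplus(alpha_raw) + 1.0  # alpha > 1, so E[sigma^2] is finite
        beta = F.softplus(beta_raw)          # beta > 0
        return gamma, nu, alpha, beta

def extract_uncertainties(gamma, nu, alpha, beta):
    """Closed-form prediction and uncertainties of the NIG distribution:
    prediction E[mu] = gamma, aleatoric E[sigma^2] = beta / (alpha - 1),
    epistemic Var[mu] = beta / (nu * (alpha - 1))."""
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return gamma, aleatoric, epistemic

# Example: one forward pass on a batch of 8 hypothetical feature vectors.
head = EvidentialHead(in_features=16)
features = torch.randn(8, 16)
pred, aleatoric, epistemic = extract_uncertainties(*head(features))
```

Note that both uncertainty estimates fall out of a single deterministic forward pass, which is the sampling-free property the abstract refers to.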