An in-depth understanding of uncertainty is the first step toward making effective decisions under uncertainty. Machine and deep learning (ML/DL) have been widely leveraged to solve complex problems that involve processing high-dimensional data. However, reasoning about and quantifying different types of uncertainty to achieve effective decision-making has been explored far less in ML/DL than in other Artificial Intelligence (AI) domains. In particular, belief/evidence theories have been studied in Knowledge Representation and Reasoning (KRR) since the 1960s to reason about and measure uncertainty in order to enhance decision-making effectiveness. We found that only a few studies have leveraged this mature body of uncertainty research from belief/evidence theories in ML/DL to tackle complex problems under different types of uncertainty. In this survey paper, we discuss several popular belief theories and their core ideas for characterizing the causes and types of uncertainty and quantifying them, along with their applicability in ML/DL. In addition, we discuss three main approaches that leverage belief theories in Deep Neural Networks (DNNs), namely Evidential DNNs, Fuzzy DNNs, and Rough DNNs, in terms of their uncertainty causes, types, and quantification methods, as well as their applicability in diverse problem domains. Based on this in-depth survey, we discuss insights, lessons learned, and limitations of the current state of the art in bridging belief theories and ML/DL, and finally outline future research directions.
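As a concrete illustration of the kind of uncertainty quantification surveyed here, the minimal sketch below shows how an Evidential DNN in a common subjective-logic formulation could map a classifier's raw logits to per-class belief masses and a vacuity-style uncertainty; the function name and the softplus evidence mapping are illustrative assumptions rather than a specific method from this survey.

```python
# Minimal sketch (illustrative, not a specific surveyed implementation):
# subjective-logic style evidential uncertainty for a K-class classifier.
import numpy as np

def evidential_uncertainty(logits):
    """Map raw logits to (per-class belief, vacuity uncertainty).

    Evidence e_k >= 0 via softplus; Dirichlet parameters alpha_k = e_k + 1;
    Dirichlet strength S = sum_k alpha_k; belief b_k = e_k / S;
    vacuity uncertainty u = K / S, so that sum_k b_k + u = 1.
    """
    logits = np.asarray(logits, dtype=float)
    evidence = np.log1p(np.exp(logits))      # softplus keeps evidence non-negative
    alpha = evidence + 1.0                   # Dirichlet concentration parameters
    strength = alpha.sum()                   # total Dirichlet strength S
    belief = evidence / strength             # per-class belief masses
    uncertainty = logits.size / strength     # vacuity: K / S
    return belief, uncertainty

# Example: a confident vs. a near-vacuous prediction for a 3-class problem.
print(evidential_uncertainty([8.0, -2.0, -2.0]))  # low vacuity uncertainty
print(evidential_uncertainty([0.1, 0.0, -0.1]))   # high vacuity uncertainty
```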