Deep neural networks have seen enormous success in various real-world applications. Beyond their predictions as point estimates, increasing attention has been paid to quantifying the uncertainty of those predictions. In this review, we show that the uncertainty of deep neural networks is not only important for interpretability and transparency, but also crucial for further advancing their performance, particularly in learning systems seeking robustness and efficiency. We generalize the definition of the uncertainty of deep neural networks to any number or vector associated with an input or an input-label pair, and catalog existing methods for ``mining'' such uncertainty from a deep model, covering both methods from the classic field of uncertainty quantification and methods specific to deep neural networks. We then show a wide spectrum of applications of such generalized uncertainty in realistic learning tasks, including robust learning, such as learning with noisy labels and adversarially robust learning; data-efficient learning, such as semi-supervised and weakly-supervised learning; and model-efficient learning, such as model compression and knowledge distillation.
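To make the generalized notion concrete, below is a minimal sketch, assuming a PyTorch classifier (the toy model, input sizes, and sample count are hypothetical, not from this review), of two common ways to ``mine'' a per-input uncertainty score from a deep model: single-pass softmax entropy, and Monte Carlo dropout, where the spread across repeated stochastic forward passes serves as the uncertainty. Under the generalized definition, both the resulting scalars and the full softmax probability vector itself qualify as uncertainty associated with an input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy classifier with dropout, standing in for any deep model.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 10),
)

x = torch.randn(8, 20)  # a batch of 8 hypothetical inputs

# (1) Softmax entropy: a deterministic, single-pass uncertainty score.
model.eval()
with torch.no_grad():
    probs = F.softmax(model(x), dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

# (2) Monte Carlo dropout: keep dropout active at test time and treat
# the variance across repeated stochastic passes as uncertainty.
model.train()  # re-enables dropout; safe here since no layers track batch statistics
with torch.no_grad():
    samples = torch.stack([F.softmax(model(x), dim=-1) for _ in range(30)])
variance = samples.var(dim=0).sum(dim=-1)  # per-input predictive variance

print(entropy.shape, variance.shape)  # each: one scalar score per input
```

Both scores attach a number to each input without requiring labels; label-dependent quantities (e.g., the per-example loss) are examples of uncertainty associated with an input-label pair.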