Popular approaches for quantifying predictive uncertainty in deep neural networks often involve a set of weight configurations or models, for instance via ensembling or Monte Carlo Dropout. These techniques usually incur overhead by requiring multiple model instances to be trained, or fail to produce sufficiently diverse predictions. This survey aims to familiarize the reader with an alternative class of models based on the concept of Evidential Deep Learning: for unfamiliar data, these models admit "what they don't know" and fall back onto a prior belief. Furthermore, they allow uncertainty estimation with a single model and forward pass by parameterizing distributions over distributions. This survey recapitulates existing work, focusing on implementations in the classification setting. Finally, we survey the application of the same paradigm to regression problems. We also reflect on the strengths and weaknesses of these approaches compared to established ones and present the most central theoretical results in order to inform future research.
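To make the "distributions over distributions" idea concrete, the sketch below illustrates one common instantiation for classification: a network outputs non-negative evidence per class, which parameterizes a Dirichlet distribution over the categorical class probabilities (in the spirit of Sensoy et al., 2018). This is a minimal illustrative sketch, not the method of any particular paper in the survey; the class name `EvidentialClassifier` and the toy dimensions are assumptions made here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialClassifier(nn.Module):
    """Illustrative evidential classifier: outputs Dirichlet parameters."""

    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Non-negative "evidence" per class; alpha = evidence + 1
        # parameterizes a Dirichlet distribution over the class
        # probabilities, i.e. a distribution over distributions.
        evidence = F.softplus(self.fc(x))
        return evidence + 1.0  # Dirichlet concentration parameters

# Single model, single forward pass.
model = EvidentialClassifier(in_features=16, num_classes=3)
x = torch.randn(4, 16)
alpha = model(x)                               # shape: (batch, num_classes)
strength = alpha.sum(dim=-1, keepdim=True)     # Dirichlet strength S

# Expected class probabilities under the Dirichlet.
probs = alpha / strength

# Vacuity-style uncertainty: with little observed evidence, alpha stays
# near the uniform prior (all ones) and uncertainty approaches 1,
# i.e. the model "admits what it doesn't know".
num_classes = alpha.shape[-1]
uncertainty = num_classes / strength

print(probs, uncertainty)
```

Note how the uncertainty term captures the fallback behavior described above: when the network produces no evidence for unfamiliar inputs, the Dirichlet reduces to its uniform prior and the uncertainty is maximal, without requiring multiple models or repeated forward passes.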