Popular approaches for quantifying predictive uncertainty in deep neural networks often involve distributions over weights or multiple models, for instance via Markov Chain sampling, ensembling, or Monte Carlo dropout. These techniques usually incur overhead from training multiple model instances, or they fail to produce sufficiently diverse predictions. This comprehensive survey aims to familiarize the reader with an alternative class of models based on the concept of Evidential Deep Learning: for unfamiliar data, they aim to admit "what they don't know" and fall back onto a prior belief. Furthermore, they allow uncertainty estimation in a single model and forward pass by parameterizing distributions over distributions. This survey recapitulates existing works, focusing on their implementation in the classification setting, before reviewing the application of the same paradigm to regression. We also reflect on the strengths and weaknesses of these models compared to other existing methods and provide the most fundamental derivations using a unified notation to aid future research.
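To make the "distributions over distributions" idea concrete, the sketch below illustrates the common classification setup: rather than a softmax over class probabilities, the network outputs the concentration parameters of a Dirichlet distribution over categorical distributions, so both the prediction and its uncertainty come from a single forward pass. This is a minimal illustration under assumptions typical of the literature (e.g., softplus-activated evidence plus one, as in Sensoy et al.'s evidential classification); the module and variable names are ours, not any paper's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialClassifier(nn.Module):
    """Sketch of a network head parameterizing a Dirichlet over class distributions."""

    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.head = nn.Linear(in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the per-class evidence non-negative; adding 1
        # recovers a uniform Dirichlet (the prior belief) when no evidence
        # is collected for the input.
        evidence = F.softplus(self.head(x))
        return evidence + 1.0  # concentration parameters alpha, shape (batch, K)


model = EvidentialClassifier(in_features=16, num_classes=3)
alpha = model(torch.randn(4, 16))
alpha0 = alpha.sum(dim=-1, keepdim=True)  # total evidence per input
p_mean = alpha / alpha0                   # expected class probabilities

# When alpha0 stays close to K (little evidence), the predictive distribution
# falls back toward the uniform prior, signaling high uncertainty -- obtained
# from a single model and a single forward pass, with no sampling or ensemble.
```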