We introduce a framework for uncertainty estimation that both describes and extends many existing methods. We treat the hyperparameters typically involved in standard training as random variables and marginalise them out to capture various sources of uncertainty in the parameter space. On standard benchmark data sets, we investigate which forms and combinations of marginalisation are most useful in practice. Moreover, we discuss how some marginalisations can produce reliable uncertainty estimates without the need for extensive hyperparameter tuning and/or large-scale ensembling.
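As a rough illustration of what marginalising over training hyperparameters can look like in code, the sketch below ensembles models trained under different draws of two hyperparameters (learning rate and initialisation seed) and reads the disagreement between members as an uncertainty signal. The toy task, the choice of hyperparameters, and their sampling distributions are all illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch: approximate marginalisation over training hyperparameters
# by ensembling models trained under different hyperparameter draws.
# The toy data, model, and hyperparameter priors are assumptions for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train_logreg(X, y, lr, seed, steps=500):
    """Train logistic regression by gradient descent; the learning rate
    and the initialisation seed play the role of hyperparameters being
    marginalised out."""
    r = np.random.default_rng(seed)
    w = r.normal(scale=0.1, size=X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w


# Sample hyperparameters from simple priors and ensemble the predictions.
members = [
    train_logreg(X, y, lr=10 ** rng.uniform(-2, -0.5), seed=s)
    for s in range(10)
]
X_test = rng.normal(size=(5, 2))
probs = np.stack([sigmoid(X_test @ w) for w in members])  # (members, points)

mean_p = probs.mean(axis=0)    # marginal predictive probability
epistemic = probs.var(axis=0)  # disagreement across hyperparameter draws
```

High variance across members flags inputs where the prediction depends strongly on how the model was trained, which is the kind of parameter-space uncertainty the marginalisation is meant to capture.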