Deep neural networks are powerful predictors for a variety of tasks; however, they do not directly capture uncertainty. Quantifying uncertainty with neural network ensembles is competitive with approaches based on Bayesian neural networks while offering better computational scalability. Building ensembles of neural networks is nevertheless challenging because, in addition to choosing the right neural architecture and hyperparameters for each member of the ensemble, there is the added cost of training each model. We propose AutoDEUQ, an automated approach for generating an ensemble of deep neural networks. Our approach leverages joint neural architecture and hyperparameter search to generate ensembles. We use the law of total variance to decompose the predictive variance of deep ensembles into aleatoric (data) and epistemic (model) uncertainties. We show that AutoDEUQ outperforms probabilistic backpropagation, Monte Carlo dropout, deep ensemble, distribution-free ensembles, and hyper ensemble methods on a number of regression benchmarks.
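The law-of-total-variance decomposition mentioned above can be sketched as follows. This is a minimal illustration with made-up member predictions, not the paper's implementation: assuming each ensemble member outputs a predictive mean and variance for an input, the total predictive variance splits into the average member variance (aleatoric) plus the variance of the member means (epistemic).

```python
import numpy as np

# Hypothetical predictions from a 4-member deep ensemble for one input:
# each member i outputs a predictive mean mu_i and variance sigma2_i.
mus = np.array([1.0, 1.2, 0.9, 1.1])          # member means
sigma2s = np.array([0.30, 0.25, 0.35, 0.28])  # member variances

aleatoric = sigma2s.mean()     # E[sigma^2]: average data noise
epistemic = mus.var()          # Var[mu]: disagreement among members
total = aleatoric + epistemic  # law of total variance: Var[y] = E[sigma^2] + Var[mu]
```

High epistemic variance here signals disagreement between ensemble members (reducible with more data or better models), while aleatoric variance reflects noise inherent in the data.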