Deep learning has achieved impressive performance on many tasks in recent years. However, point estimates alone are often insufficient for deep neural networks: for high-risk tasks, we need to assess the reliability of model predictions, which requires quantifying predictive uncertainty and constructing prediction intervals. In this paper, we explore uncertainty in deep learning in order to construct such intervals. We jointly consider two categories of uncertainty: aleatoric uncertainty and epistemic uncertainty. We design a special loss function that enables the model to learn uncertainty without uncertainty labels; only the regression task itself is supervised. Aleatoric uncertainty is learned implicitly through the loss function, while epistemic uncertainty is accounted for through ensembling. Our method ties the construction of prediction intervals directly to the uncertainty estimates. Strong results on several publicly available datasets show that our method is competitive with other state-of-the-art methods.
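The abstract does not spell out the loss function; a minimal sketch consistent with its description is the heteroscedastic Gaussian negative log-likelihood, where the network predicts a mean and a log-variance, only the regression target is supervised, and epistemic uncertainty comes from an ensemble of independently trained networks. All names (GaussianNLLNet, gaussian_nll) and the toy data below are hypothetical illustrations under that assumption, not the paper's actual code.

```python
import torch
import torch.nn as nn

class GaussianNLLNet(nn.Module):
    """Regression net predicting a mean and a log-variance per input."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.log_var_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(mean, log_var, y):
    # Heteroscedastic Gaussian NLL: only the regression target y is
    # supervised; the variance head is fit implicitly by the loss itself,
    # so no uncertainty labels are needed.
    return (0.5 * torch.exp(-log_var) * (y - mean) ** 2 + 0.5 * log_var).mean()

# Toy usage: an ensemble of M independently initialized networks.
torch.manual_seed(0)
x = torch.randn(256, 3)
y = x.sum(dim=1, keepdim=True) + 0.3 * torch.randn(256, 1)

ensemble = [GaussianNLLNet(3) for _ in range(5)]
for net in ensemble:
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        mu, lv = net(x)
        gaussian_nll(mu, lv, y).backward()
        opt.step()

with torch.no_grad():
    mus = torch.stack([net(x)[0] for net in ensemble])         # (M, N, 1)
    alea = torch.stack([net(x)[1].exp() for net in ensemble])  # per-net sigma^2
    mu = mus.mean(0)                 # ensemble mean prediction
    var = alea.mean(0) + mus.var(0)  # aleatoric + epistemic variance
    lo, hi = mu - 1.96 * var.sqrt(), mu + 1.96 * var.sqrt()    # ~95% interval
```

Under this reading, the prediction interval width is driven directly by the two estimated uncertainty components, which is how the construction of intervals and the uncertainty estimation become coupled.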