Consideration of predictive uncertainty is of utmost importance in medical imaging with deep learning. We estimate both aleatoric and epistemic uncertainty in regression tasks using variational Bayesian inference with Monte Carlo dropout and show that predictive uncertainty is systematically underestimated. We apply $\sigma$ scaling with a single scalar value, a simple yet effective calibration method for both types of uncertainty. The performance of our approach is evaluated on a variety of common medical regression data sets using different state-of-the-art convolutional network architectures. In our experiments, $\sigma$ scaling reliably recalibrates predictive uncertainty. It is easy to implement and preserves accuracy. Well-calibrated uncertainty in regression allows robust rejection of unreliable predictions or detection of out-of-distribution samples. Our source code is available at https://github.com/mlaves/well-calibrated-regression-uncertainty
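As a rough illustration of the $\sigma$ scaling idea described above, the sketch below fits a single scalar on a held-out calibration set by minimizing the Gaussian negative log-likelihood of the model's predictions; the function name, variable names, and the closed-form minimizer are illustrative assumptions, not taken from the released code.

```python
import torch

def fit_sigma_scaling(mu: torch.Tensor, sigma: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Fit a single scalar s that rescales predicted standard deviations
    sigma -> s * sigma by minimizing the Gaussian negative log-likelihood
    on a calibration set (mu, sigma, y).

    For this one-parameter problem the NLL minimizer has a closed form:
        s^2 = mean( (y - mu)^2 / sigma^2 )
    """
    s_squared = torch.mean((y - mu) ** 2 / sigma ** 2)
    return torch.sqrt(s_squared)

# Hypothetical usage: mu_cal/sigma_cal are the predictive mean and standard
# deviation obtained from MC dropout forward passes on calibration data.
# s = fit_sigma_scaling(mu_cal, sigma_cal, y_cal)
# sigma_test_calibrated = s * sigma_test   # recalibrated test-time uncertainty
```

Because the scaling is a single scalar, it cannot change the ranking of uncertainties or the predicted means, which is why accuracy is unaffected.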