Uncertainty estimation is an essential step in evaluating the robustness of deep learning models in computer vision, especially when they are applied in risk-sensitive areas. However, most state-of-the-art deep learning models either fail to provide uncertainty estimates or require significant modification (e.g., formulating a proper Bayesian treatment) to obtain them. Most previous methods cannot take an arbitrary off-the-shelf model and produce uncertainty estimates without retraining or redesigning it. To address this gap, we perform a systematic exploration of training-free uncertainty estimation for dense regression, an unrecognized yet important problem, and provide a theoretical construction justifying such estimation. We propose three simple and scalable methods that analyze the variance of outputs from a trained network under tolerable perturbations: infer-transformation, infer-noise, and infer-dropout. They operate solely during inference, without the need to re-train, re-design, or fine-tune the model, as typically required by state-of-the-art uncertainty estimation methods. Surprisingly, even without involving such perturbations in training, our methods produce uncertainty estimates that are comparable to, or even better than, those of training-required state-of-the-art methods.
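As a rough illustration of the general idea described above, the minimal sketch below estimates per-pixel uncertainty for a trained dense-regression network by applying a tolerable perturbation only at inference time and measuring the variance of the resulting outputs. The function name, the choice of Gaussian input noise (an infer-noise-style perturbation), and the noise level are illustrative assumptions, not the exact procedure of the paper; a trained PyTorch model with output spatially aligned to its input is assumed.

```python
import torch

def inference_time_uncertainty(model, x, n_samples=8, noise_std=0.01):
    """Hypothetical sketch: estimate uncertainty for a trained dense-regression
    model by perturbing the input at inference time (no retraining or
    fine-tuning) and computing the per-pixel variance of the outputs."""
    model.eval()
    outputs = []
    with torch.no_grad():
        for _ in range(n_samples):
            # Apply a small, tolerable perturbation to the input
            # (here: additive Gaussian noise; test-time transformations or
            # inference-time dropout are analogous choices).
            perturbed = x + noise_std * torch.randn_like(x)
            outputs.append(model(perturbed))
    stacked = torch.stack(outputs, dim=0)   # shape: (n_samples, B, C, H, W)
    prediction = stacked.mean(dim=0)        # averaged prediction
    uncertainty = stacked.var(dim=0)        # per-pixel variance as uncertainty
    return prediction, uncertainty
```

The same pattern applies to the other two perturbation types: for transformations, each input is transformed (e.g., flipped), passed through the network, and the output mapped back before computing the variance; for dropout, stochastic dropout layers are kept active at inference and the variance is taken over repeated forward passes.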