Neural networks are used in many real-world applications, but they often struggle to estimate their own confidence. This is particularly problematic for computer vision applications that make high-stakes decisions affecting humans and their lives. In this paper we perform a meta-analysis of the literature, showing that most, if not all, computer vision applications do not use proper epistemic uncertainty quantification, which means that these models ignore their own limitations. We describe the consequences of deploying models without proper uncertainty quantification, and we motivate the community to adopt variants of their models with well-calibrated epistemic uncertainty, which enables out-of-distribution detection. We close the paper with a summary of the challenges of estimating uncertainty in computer vision applications, together with recommendations.
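As a concrete illustration of the kind of epistemic uncertainty quantification the abstract advocates, the sketch below shows Monte Carlo Dropout, one common approximation, used to score inputs for out-of-distribution detection. This is not the paper's own method; the model `SmallClassifier`, the helper `mc_dropout_predict`, and all parameter values are hypothetical, chosen only for the example.

```python
# A minimal sketch, assuming MC Dropout as the uncertainty method; all names
# and values here are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    """Toy classifier with a dropout layer that can stay active at test time."""
    def __init__(self, in_dim=32, num_classes=10, p_drop=0.5):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 64)
        self.drop = nn.Dropout(p_drop)
        self.fc2 = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Average n_samples stochastic forward passes with dropout enabled."""
    model.train()  # keeps dropout active; batch-norm layers would need freezing
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    # Predictive entropy of the averaged prediction: high values signal
    # inputs the model is uncertain about.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

model = SmallClassifier()
x = torch.randn(4, 32)  # a toy batch standing in for image features
mean_probs, entropy = mc_dropout_predict(model, x)
# Inputs whose entropy exceeds a threshold calibrated on validation data can
# be flagged as out of distribution instead of being classified blindly.
print(entropy)
```

In this setup, out-of-distribution detection reduces to thresholding the uncertainty score, which is exactly why the calibration the abstract calls for matters: an uncalibrated score makes any threshold unreliable.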