Distribution shift is a central concern in deep image classification, arising either from corruption of the source images or from a complete change of domain, in which case the remedy is domain adaptation. While the primary goal is to improve accuracy under distribution shift, an important secondary goal is uncertainty estimation: estimating the probability that a model's prediction is correct. While improving accuracy is hard, uncertainty estimation turns out to be frustratingly easy. Prior work has incorporated uncertainty estimation into the model and training paradigm in various ways. Instead, we show that uncertainty can be estimated by simply exposing the original model to corrupted images and performing simple statistical calibration on the model outputs. Our frustratingly easy methods achieve superior performance on a wide range of distribution shifts as well as on unsupervised domain adaptation tasks, as measured through extensive experiments.
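The abstract only sketches the method at a high level. To make the idea concrete, below is a minimal sketch of one plausible reading: the "simple statistical calibration" is taken to be temperature scaling (Guo et al., 2017), fit on logits collected from a corrupted copy of the validation set rather than the clean one. The corruption function and helper names here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


def fit_temperature(logits: torch.Tensor, labels: torch.Tensor,
                    max_iter: int = 50) -> float:
    """Fit a single temperature T by minimizing NLL on held-out logits.

    Standard temperature scaling: calibrated probabilities are
    softmax(logits / T). We optimize log T so T stays positive.
    """
    log_t = torch.zeros(1, requires_grad=True)
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)
    nll = nn.CrossEntropyLoss()

    def closure():
        optimizer.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()


def gaussian_corrupt(images: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    # Illustrative corruption (assumption): additive Gaussian pixel noise.
    return images + sigma * torch.randn_like(images)


@torch.no_grad()
def collect_logits(model: nn.Module, loader, corrupt):
    # Gather logits on corrupted validation images; `corrupt` is any
    # image-corruption function, e.g. gaussian_corrupt above.
    model.eval()
    all_logits, all_labels = [], []
    for images, labels in loader:
        all_logits.append(model(corrupt(images)))
        all_labels.append(labels)
    return torch.cat(all_logits), torch.cat(all_labels)


# Hypothetical usage, given a trained `model` and a `val_loader`:
#   logits, labels = collect_logits(model, val_loader, gaussian_corrupt)
#   T = fit_temperature(logits, labels)
# At test time, divide the model's logits by T before the softmax.
```

Note that the base model is untouched: only a single scalar is fit after training, which is what makes this style of post-hoc calibration cheap to apply under distribution shift.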