Deep neural networks achieve high prediction accuracy when the training and test distributions coincide. In practice, however, various types of corruptions can break this assumption and severely degrade performance. Only a few methods address generalization in the presence of unexpected domain shifts observed during deployment. In this paper, a misclassification-aware Gaussian smoothing approach is presented to improve the robustness of image classifiers against a variety of corruptions while maintaining clean accuracy. The intuition behind the proposed misclassification-aware objective is revealed through bounds on the local loss deviation in the small-noise regime. When our method is coupled with additional data augmentations, it is empirically shown to improve upon the state of the art in robustness and uncertainty calibration on several image classification tasks.
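To make the idea concrete, the following is a minimal conceptual sketch of a misclassification-aware Gaussian smoothing objective, not the paper's actual formulation: the loss on Gaussian-perturbed copies of an input is averaged, and its contribution is weighted more heavily when the clean input is already misclassified. The linear model, the particular weighting scheme, and all parameter names (`sigma`, `n_samples`, `miscls_weight`) are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, label):
    """Cross-entropy of a single example against an integer label."""
    return -np.log(softmax(logits)[label] + 1e-12)

def misclassification_aware_smoothing_loss(W, x, y, sigma=0.1,
                                           n_samples=8,
                                           miscls_weight=2.0,
                                           rng=None):
    """Sketch of a smoothing objective for a linear classifier logits = x @ W.

    Combines the clean cross-entropy with the average loss over
    Gaussian-perturbed inputs; the smoothing term is up-weighted
    (hypothetical choice) when the clean input is misclassified.
    """
    rng = np.random.default_rng() if rng is None else rng
    clean_logits = x @ W
    clean_loss = cross_entropy(clean_logits, y)
    misclassified = int(np.argmax(clean_logits)) != y

    # Monte Carlo estimate of the expected loss under Gaussian input noise.
    noisy_losses = [
        cross_entropy((x + rng.normal(0.0, sigma, size=x.shape)) @ W, y)
        for _ in range(n_samples)
    ]
    smoothing_term = float(np.mean(noisy_losses))

    weight = miscls_weight if misclassified else 1.0
    return clean_loss + weight * smoothing_term
```

In the small-noise regime (`sigma` → 0) the smoothing term approaches the clean loss, which is the setting in which the paper's bounds on local loss deviation are stated.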