Image classification has improved significantly with deep learning, mainly owing to convolutional neural networks (CNNs) that learn rich feature extractors from large datasets. However, most deep learning classification methods are trained on clean images and are not robust when handling noisy ones, even if a restoration preprocessing step is applied. While novel methods address this problem, they rely on modified feature extractors and thus necessitate retraining. We instead propose a method that can be applied to a *pretrained* classifier. Our method exploits a fidelity map estimate that is fused into the internal representations of the feature extractor, thereby guiding the attention of the network and making it more robust to noisy data. We improve noisy-image classification (NIC) results by significant margins, especially at high noise levels, and come close to the fully retrained approaches. Furthermore, as proof of concept, we show that when using our oracle fidelity map we even outperform the fully retrained methods, whether trained on noisy or restored images.
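As a minimal illustration of the idea, the sketch below fuses a per-pixel fidelity map into an intermediate feature tensor by elementwise modulation. This is only one plausible fusion operator, written here as an assumption; the abstract does not specify the exact mechanism, and the function and variable names are hypothetical.

```python
import numpy as np

def fuse_fidelity(features: np.ndarray, fidelity: np.ndarray) -> np.ndarray:
    """Modulate CNN activations by a fidelity map (hypothetical fusion).

    features: (C, H, W) activations from a pretrained feature extractor.
    fidelity: (H, W) per-pixel confidence estimate in [0, 1], where low
              values mark regions degraded by noise.
    Returns the reweighted activations, attenuating unreliable regions.
    """
    # Broadcast the (H, W) map across all C channels.
    return features * fidelity[None, :, :]

# Toy example: uniform activations, spatially varying fidelity.
feats = np.ones((4, 2, 2))
fid = np.array([[1.0, 0.5],
                [0.25, 0.0]])
out = fuse_fidelity(feats, fid)
# Activations in low-fidelity regions are suppressed toward zero.
```

In practice such a map would be injected at one or more internal layers, steering the network's attention toward reliably restored regions rather than corrupted ones.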