Image classifiers are typically scored on their test set accuracy, but high accuracy can mask a subtle type of model failure. We find that high-scoring convolutional neural networks (CNNs) on popular benchmarks exhibit troubling pathologies that allow them to display high accuracy even in the absence of semantically salient features. When a model provides a high-confidence decision without salient supporting input features, we say the classifier has overinterpreted its input, finding too much class evidence in patterns that appear nonsensical to humans. Here, we demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation, and we find that models trained on CIFAR-10 make confident predictions even when 95% of input images are masked and humans cannot discern salient features in the remaining pixel subsets. We introduce Batched Gradient SIS, a new method for discovering sufficient input subsets in complex datasets, and use this method to show the sufficiency of border pixels in ImageNet for training and testing. Although these patterns portend potential model fragility in real-world deployment, they are in fact valid statistical patterns of the benchmark that alone suffice to attain high test accuracy. Unlike adversarial examples, overinterpretation relies upon unmodified image pixels. We find that ensembling and input dropout can each help mitigate overinterpretation.
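To make the idea of a sufficient input subset concrete, the following is a minimal sketch (not the authors' released implementation) of gradient-guided backward selection in the spirit of Batched Gradient SIS: starting from the full image, it repeatedly masks the batch of pixels with the lowest saliency (gradient magnitude of the target-class confidence) for as long as the model's confidence stays above a threshold. The surviving pixels approximate a sufficient input subset. The names `model`, `image`, and `target_class` are assumed inputs supplied by the caller.

```python
import torch
import torch.nn.functional as F


def batched_gradient_sis(model, image, target_class, threshold=0.9, batch_frac=0.05):
    """Greedily mask low-saliency pixels while confidence stays >= threshold.

    image: float tensor of shape (C, H, W). Returns a boolean (H, W) mask of
    the pixels that were kept.
    """
    model.eval()
    _, h, w = image.shape
    keep = torch.ones(h, w, dtype=torch.bool)        # pixels still present
    batch_size = max(1, int(batch_frac * h * w))     # pixels removed per step
    prev_keep = keep.clone()

    while keep.sum().item() > batch_size:
        masked = (image * keep.unsqueeze(0)).unsqueeze(0).requires_grad_(True)
        conf = F.softmax(model(masked), dim=1)[0, target_class]
        if conf.item() < threshold:
            keep = prev_keep                         # roll back to last confident mask
            break
        prev_keep = keep.clone()

        # Saliency of each pixel: gradient magnitude of the target-class
        # confidence, summed over channels.
        grad, = torch.autograd.grad(conf, masked)
        saliency = grad[0].abs().sum(dim=0)
        saliency[~keep] = float("inf")               # never re-select removed pixels

        # Mask the batch of currently kept pixels with the smallest saliency.
        low = torch.topk(saliency.flatten(), batch_size, largest=False).indices
        keep.view(-1)[low] = False

    return keep
```

Applying the returned mask to images and re-evaluating the classifier is one way to probe whether such small pixel subsets alone support confident predictions, i.e., whether the model overinterprets its input.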