Traditional evaluation metrics for learned models that report aggregate scores over a test set are insufficient for surfacing important and informative patterns of failure over features and instances. We introduce and study a method aimed at characterizing and explaining failures by identifying visual attributes whose presence or absence results in poor performance. In distinction to previous work that relies upon crowdsourced labels for visual attributes, we leverage the representation of a separate robust model to extract interpretable features, and then harness these features to identify failure modes. We further propose a visualization method aimed at enabling humans to understand the meaning encoded in such features, and we test the comprehensibility of the features. An evaluation of the methods on the ImageNet dataset demonstrates that: (i) the proposed workflow is effective for discovering important failure modes, (ii) the visualization techniques help humans to understand the extracted features, and (iii) the extracted insights can assist engineers with error analysis and debugging.
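The workflow described above can be illustrated with a minimal sketch. This is not the paper's implementation: the "robust features" below are synthetic stand-ins for activations extracted from a separately trained robust model, the failure labels are simulated, and failure modes are surfaced by a simple threshold scan over individual features rather than the authors' actual procedure. The idea it demonstrates is the same: slice the data by interpretable feature values and rank slices by the base model's failure rate.

```python
# Hedged sketch: find a feature whose low/high values concentrate failures.
# All data here is synthetic; in the real workflow, `features` would come
# from a robust model's representation and `fails` from a base classifier's
# errors on held-out images.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 8
features = rng.normal(size=(n, d))  # stand-in robust-feature matrix

# Simulate a base model whose errors concentrate where feature 3 is low,
# mimicking a visual attribute whose absence degrades performance.
fails = (features[:, 3] < -0.5) | (rng.random(n) < 0.05)

base_rate = fails.mean()
best = None
for j in range(d):
    thresh = np.median(features[:, j])
    for side, mask in (("low", features[:, j] < thresh),
                       ("high", features[:, j] >= thresh)):
        rate = fails[mask].mean()
        if best is None or rate > best[0]:
            best = (rate, j, side)

rate, j, side = best
print(f"base failure rate: {base_rate:.2f}")
print(f"worst slice: feature {j} ({side}), failure rate {rate:.2f}")
```

Each reported slice is a human-readable condition on one interpretable feature, which is what makes the discovered failure mode explainable to an engineer debugging the model.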