The interpretation of deep neural networks (DNNs) has become a key topic as more and more people apply them to solve various problems and make critical decisions. Concept-based explanations have recently become a popular approach for post-hoc interpretation of DNNs. However, identifying human-understandable visual concepts that affect model decisions is a challenging task that is not easily addressed by automatic approaches. We present a novel human-in-the-loop approach to generate user-defined concepts for model interpretation and diagnostics. Central to our proposal is the use of active learning, where human knowledge and feedback are combined to train a concept extractor with very little human labeling effort. We integrate this process into an interactive system, ConceptExtract. Through two case studies, we show how our approach helps analyze model behavior and extract human-friendly concepts for different machine learning tasks and datasets, and how these concepts can be used to understand predictions, compare model performance, and make suggestions for model refinement. Quantitative experiments show that our active learning approach can accurately extract meaningful visual concepts. More importantly, by identifying visual concepts that negatively affect model performance, we develop a corresponding data augmentation strategy that consistently improves model performance.