Understanding and explaining the mistakes made by trained models is critical to many machine learning objectives, such as improving robustness, addressing concept drift, and mitigating biases. However, this is often an ad hoc process that involves manually looking at the model's mistakes on many test samples and guessing at the underlying reasons for those incorrect predictions. In this paper, we propose a systematic approach, conceptual counterfactual explanations (CCE), that explains why a classifier makes a mistake on a particular test sample in terms of human-understandable concepts (e.g., this zebra was misclassified as a dog because of faint stripes). We base CCE on two prior ideas: counterfactual explanations and concept activation vectors, and we validate our approach on well-known pretrained models, showing that it explains the models' mistakes meaningfully. In addition, for new models trained on data with spurious correlations, CCE accurately identifies the spurious correlation as the cause of model mistakes from a single misclassified test sample. On two challenging medical applications, CCE generated useful insights, confirmed by clinicians, into biases and mistakes the model makes in real-world settings. The code for CCE is publicly available and can easily be applied to explain mistakes in new models.
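As a rough illustration of the idea described above (not the authors' reference implementation), the following Python sketch combines the two ingredients named in the abstract: concept activation vectors learned from embeddings of concept examples, and a counterfactual search over concept weights that flips a misclassified sample's prediction. The helper names `learn_cav` and `conceptual_counterfactual`, the frozen linear head `head`, and all hyperparameters are assumptions for illustration only.

```python
# Minimal sketch of a conceptual-counterfactual explanation, assuming a frozen
# feature extractor and a linear classification head over its embeddings.
import torch
import torch.nn.functional as F

def learn_cav(pos_emb, neg_emb):
    """Fit a linear direction (a concept activation vector) separating
    embeddings of a concept's positive examples from random negatives."""
    X = torch.cat([pos_emb, neg_emb])
    y = torch.cat([torch.ones(len(pos_emb)), torch.zeros(len(neg_emb))])
    w = torch.zeros(X.shape[1], requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=0.1)
    for _ in range(200):
        opt.zero_grad()
        loss = F.binary_cross_entropy_with_logits(X @ w + b, y)
        loss.backward()
        opt.step()
    return (w / w.norm()).detach()

def conceptual_counterfactual(emb, true_label, head, cavs,
                              steps=300, lr=0.05, l1=0.1):
    """Find per-concept weights that, when used to shift the misclassified
    sample's embedding along the concept directions, make the frozen head
    predict the true label. Large positive/negative weights point to
    concepts whose addition/removal would fix the mistake."""
    concepts = list(cavs.keys())
    V = torch.stack([cavs[c] for c in concepts])        # (n_concepts, d)
    w = torch.zeros(len(concepts), requires_grad=True)  # concept weights
    opt = torch.optim.Adam([w], lr=lr)
    target = torch.tensor([true_label])
    for _ in range(steps):
        opt.zero_grad()
        logits = head(emb + w @ V)                      # shifted embedding
        loss = F.cross_entropy(logits.unsqueeze(0), target) + l1 * w.abs().sum()
        loss.backward()
        opt.step()
    return dict(zip(concepts, w.detach().tolist()))
```

In the zebra example from the abstract, a large positive weight on a "stripes" concept would be read as "adding stripes to this sample's representation would correct the prediction," i.e., faint stripes explain the mistake. The sketch omits details such as validity constraints on the perturbed embedding and concept ranking, which the full method would need.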