A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) aims to address this issue. However, XAI approaches are often tested only on generic classifiers and do not reflect realistic problems such as those of medical diagnosis. In this paper, we analyze a case study on skin lesion images in which we customize an existing XAI approach to explain a deep learning model able to recognize different types of skin lesions. The explanation consists of synthetic exemplar and counter-exemplar images of skin lesions and offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A survey conducted with domain experts, beginners, and unskilled people shows that the use of explanations increases trust and confidence in the automatic decision system. Moreover, an analysis of the latent space adopted by the explainer reveals that some of the most frequent skin lesion classes are distinctly separated. This phenomenon could derive from the intrinsic characteristics of each class and could provide support in resolving the most frequent misclassifications made by human experts.