Convolutional neural networks (CNNs) are known for their excellent feature extraction capabilities, which enable models to be learned from data, yet they are typically used as black boxes. Interpreting the convolutional filters and their associated features can help establish an understanding of how a CNN distinguishes between classes. In this work, we focus on the explainability of a CNN model, called cnnexplain, used for Covid-19 versus non-Covid-19 classification, with emphasis on the interpretability of the features extracted by the convolutional filters and on how these features contribute to classification. Specifically, we apply several explainable artificial intelligence (XAI) methods, including filter visualizations, SmoothGrad, Grad-CAM, and LIME, to interpret the convolutional filters and the relevant features, and to clarify their role in classification. We analyze the explanations produced by these methods for Covid-19 detection from dry-cough spectrograms. The explanation results obtained from LIME, SmoothGrad, and Grad-CAM highlight important features of the spectrograms and their relevance to classification.
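To make one of the named XAI methods concrete, the sketch below shows how Grad-CAM can be computed for a binary spectrogram classifier. This is a minimal illustration, not the paper's cnnexplain model: `TinyCNN`, the input size, the target layer index, and the random placeholder spectrogram are all assumptions introduced here for demonstration.

```python
# Minimal Grad-CAM sketch in PyTorch. Assumptions: TinyCNN is a stand-in
# for the paper's cnnexplain model; the "spectrogram" is random placeholder
# data; class index 1 is arbitrarily taken to mean "Covid-19".
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Stand-in binary classifier (Covid-19 vs. non-Covid-19)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def grad_cam(model, x, class_idx):
    """Heatmap of the input regions that drive the class_idx logit."""
    acts, grads = {}, {}
    layer = model.features[3]  # last convolutional layer
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()      # gradients of the target logit
    h1.remove(); h2.remove()
    # Global-average-pool the gradients to weight each feature map,
    # then combine the maps and keep only positive evidence (ReLU).
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts["a"]).sum(dim=1))
    cam = cam / (cam.max() + 1e-8)       # normalize to [0, 1]
    return F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")

spectrogram = torch.randn(1, 1, 128, 128)  # placeholder dry-cough spectrogram
model = TinyCNN().eval()
heatmap = grad_cam(model, spectrogram, class_idx=1)
print(heatmap.shape)  # torch.Size([1, 1, 128, 128]), overlayable on the input
```

The resulting heatmap has the same spatial size as the input spectrogram, so it can be overlaid on the time-frequency plot to show which regions the classifier attends to; SmoothGrad and LIME would produce analogous saliency maps by gradient averaging over noisy copies and by local surrogate modeling, respectively.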