White Blood Cell (WBC) leukaemia is detected through image-based classification. Convolutional Neural Networks are used to learn the features needed to classify images of cells as malignant or healthy. However, this type of model requires learning a large number of parameters and is difficult to interpret and explain. Explainable AI (XAI) attempts to alleviate this issue by providing insights into how models make decisions. Therefore, we present an XAI model that uses only 24 explainable and interpretable features and remains highly competitive with other approaches, outperforming them by about 4.38\%. Further, our approach provides insight into which variables are the most important for the classification of the cells. This insight provides evidence that when labs treat the WBCs differently, the importance of various metrics changes substantially. Understanding which features are important for classification is vital in medical imaging diagnosis and, by extension, to understanding the AI models built in scientific pursuits.
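As a minimal sketch of the kind of pipeline described above (not the paper's actual implementation), the snippet below trains an interpretable classifier on a small set of per-cell features and reads off feature importances. The feature names, the choice of a random forest, and the synthetic data are illustrative assumptions only; the paper's 24 features and model are not specified here.

\begin{verbatim}
# Sketch: interpretable classification of WBCs from a few hand-crafted
# features, with per-feature importance scores. Names and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical interpretable features (the full approach uses 24).
feature_names = ["cell_area", "nucleus_area", "nucleus_to_cell_ratio",
                 "perimeter", "circularity", "mean_intensity"]

# Placeholder synthetic data standing in for per-cell measurements.
X = rng.normal(size=(500, len(feature_names)))
y = rng.integers(0, 2, size=500)   # 0 = healthy, 1 = malignant (illustrative)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Feature importances give the per-variable insight referred to above.
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
\end{verbatim}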