The ability to explain the predictions of deep learning models to end-users is essential for leveraging the power of artificial intelligence (AI) in medical decision-making, where such models are usually considered non-transparent and difficult to comprehend. In this paper, we apply state-of-the-art eXplainable Artificial Intelligence (XAI) methods to explain the predictions of black-box AI models in a thyroid nodule diagnosis application. We propose new statistics-based XAI methods, namely Kernel Density Estimation and density map, to explain the case in which no nodule is detected. We compare the XAI methods qualitatively and quantitatively, and use the results as feedback to improve data quality and model performance. Finally, we conduct a survey to assess doctors' and patients' trust in the XAI explanations of the model's decisions on thyroid nodule images.
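To illustrate the density-map idea, the following is a minimal sketch, not the authors' implementation: it assumes the detector emits candidate-nodule coordinates and builds a 2-D kernel density estimate over them, so a uniformly low map can support a "no nodule detected" explanation. The function name `density_map`, the grid step, and the placeholder candidate points are hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's code): rasterize a 2-D KDE
# over candidate-nodule coordinates into a normalized density map.
import numpy as np
from scipy.stats import gaussian_kde

def density_map(points, image_shape, grid_step=4):
    """Estimate a KDE over (x, y) candidate locations and rasterize it.

    points      -- array of shape (N, 2): hypothetical candidate-nodule
                   coordinates emitted by the detector
    image_shape -- (height, width) of the ultrasound image
    """
    kde = gaussian_kde(points.T)  # scipy expects shape (dims, N)
    ys, xs = np.mgrid[0:image_shape[0]:grid_step, 0:image_shape[1]:grid_step]
    grid = np.vstack([xs.ravel(), ys.ravel()])
    density = kde(grid).reshape(xs.shape)
    return density / density.max()  # normalize to [0, 1] for overlay

# Usage: overlay the map on the image; low density everywhere is evidence
# for the "no nodule detected" case.
candidates = np.random.rand(50, 2) * [512, 512]  # placeholder coordinates
heatmap = density_map(candidates, (512, 512))
```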