Traditional methods of identifying pathologies in X-ray images rely heavily on skilled human interpretation and are often time-consuming. The advent of deep learning techniques has enabled the development of automated disease diagnosis systems, but the performance of such systems depends on both the quality of the model and the level of interpretability it provides. In this paper, we propose a multi-label disease diagnosis model for chest X-rays using a dense convolutional neural network (DenseNet), with model interpretability provided by Grad-CAM. We trained our model on frontal X-rays and evaluated its performance using various quantitative metrics, including the area under the receiver operating characteristic curve (AUC). Our proposed model achieved its highest AUC score of 0.896 for Cardiomegaly, with an accuracy of 0.826, while the lowest AUC score, 0.655 with an accuracy of 0.66, was obtained for Nodule. To promote model interpretability and build trust in decision making, we generated heatmaps over the X-rays to visualize the regions the model attended to when making its predictions. Additionally, we estimated the uncertainty in model predictions by reporting confidence intervals for our measurements. Our proposed automated disease diagnosis model achieved strong performance on the multi-label disease diagnosis task and provided visualizations of model predictions for interpretability.
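The per-condition evaluation described above (an AUC score plus a confidence interval to express uncertainty) can be sketched as follows. This is a minimal illustration, assuming a rank-based AUC and a percentile bootstrap for the interval; the function names and the synthetic labels and scores are ours, not the paper's data or code:

```python
import numpy as np

def auc_score(labels, scores):
    """Rank-based AUC: probability that a random positive outranks a random negative.
    Assumes continuous scores (no ties)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Mann-Whitney U statistic normalized to [0, 1]
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for AUC, one way to quantify
    the uncertainty of a reported per-condition metric."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    n = len(labels)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        if labels[idx].min() == labels[idx].max():
            continue  # resample must contain both classes for AUC to be defined
        stats.append(auc_score(labels[idx], scores[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Synthetic example: binary labels for one condition and model scores
y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
print(auc_score(y, s))  # → 0.75
```

In the multi-label setting, this evaluation would simply be repeated independently for each condition (e.g. Cardiomegaly, Nodule) using that condition's binary labels and predicted probabilities.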