We are witnessing widespread adoption of artificial intelligence in healthcare. However, most deep learning (DL) advances in this area consider only unimodal data, neglecting other modalities whose joint interpretation is necessary to support diagnosis, prognosis and treatment decisions. In this work we present a deep architecture, explainable by design, which jointly learns modality reconstructions and sample classification using tabular and imaging data. The explanation of the decision is computed by applying a latent shift that simulates a counterfactual prediction, revealing the features of each modality that contribute most to the decision, together with a quantitative score indicating each modality's importance. We validate our approach in the context of the COVID-19 pandemic using the AIforCOVID dataset, which contains multimodal data for the early identification of patients at risk of severe outcome. The results show that the proposed method provides meaningful explanations without degrading classification performance.
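The latent-shift idea above can be illustrated with a minimal sketch. Everything here is a toy assumption, not the paper's architecture: a linear classifier head and two linear modality decoders stand in for the learned networks, and the shift direction is the (closed-form) gradient of the logit. The point is only the mechanism: moving the shared latent against the classifier's gradient simulates a counterfactual, and the resulting change in each modality's reconstruction yields per-feature attributions and a modality-importance score.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 8
w = rng.normal(size=latent_dim)  # toy linear classifier weights (logit = z @ w)

def classify(z):
    """Probability of the positive class (toy logistic head)."""
    return 1.0 / (1.0 + np.exp(-z @ w))

# Toy linear decoders for the two modalities (tabular and imaging).
dec_tab = rng.normal(size=(latent_dim, 5))    # 5 tabular features
dec_img = rng.normal(size=(latent_dim, 16))   # 16 "pixels"

def latent_shift_explanation(z, lam=2.0):
    # For a linear head the gradient of the logit w.r.t. z is just w,
    # so shifting z by -lam * w pushes the prediction toward the other class.
    z_shift = z - lam * w
    # Per-feature attribution: change in each modality's reconstruction.
    delta_tab = z_shift @ dec_tab - z @ dec_tab
    delta_img = z_shift @ dec_img - z @ dec_img
    # Modality importance: relative magnitude of the reconstruction change.
    imp_tab = np.linalg.norm(delta_tab)
    imp_img = np.linalg.norm(delta_img)
    total = imp_tab + imp_img
    return delta_tab, delta_img, imp_tab / total, imp_img / total

z = rng.normal(size=latent_dim)
d_tab, d_img, s_tab, s_img = latent_shift_explanation(z)
print(f"P(positive) before shift: {classify(z):.3f}, after: {classify(z - 2.0 * w):.3f}")
print(f"modality importance: tabular {s_tab:.2f}, imaging {s_img:.2f}")
```

In the actual model the decoders and classifier are deep networks, so the shift direction is obtained by backpropagation rather than in closed form, but the counterfactual-and-compare logic is the same.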