Interpretability in Graph Convolutional Networks (GCNs) has been explored to some extent in computer vision in general, yet in the medical domain it requires further examination. Moreover, most interpretability approaches for GCNs, especially in the medical domain, interpret the model in a post hoc fashion. In this paper, we propose an interpretable graph learning-based model which 1) interprets the clinical relevance of the input features towards the task, 2) uses the explanation to improve the model performance, and 3) learns a population-level latent graph that may be used to interpret the cohort's behavior. In a clinical scenario, such a model can assist clinical experts in better decision-making for diagnosis and treatment planning. The main novelty lies in the interpretable attention module (IAM), which operates directly on multi-modal features. Our IAM learns the attention for each feature based on unique interpretability-specific losses. We demonstrate the application on two publicly available datasets, Tadpole and UKBB, for three tasks: disease, age, and gender prediction. Our proposed model outperforms the compared methods, with average accuracy improvements of 3.2% for Tadpole, 1.6% for the UKBB Gender prediction task, and 2% for the UKBB Age prediction task. Further, we show exhaustive validation and clinical interpretation of our results.
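To make the idea of a feature-level attention module concrete, below is a minimal sketch of what per-feature attention with an interpretability-oriented sparsity penalty might look like in PyTorch. The names (FeatureAttention, loss_fn, lam) and the L1 penalty are illustrative assumptions for exposition only, not the paper's actual IAM architecture or its interpretability-specific losses.

```python
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Hypothetical per-feature attention over multi-modal input features.

    Learns a weight in [0, 1] for each input feature; a sparsity penalty
    on the weights encourages an interpretable feature ranking.
    """

    def __init__(self, num_features: int, hidden_dim: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(num_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_features),
            nn.Sigmoid(),  # per-feature attention score in [0, 1]
        )

    def forward(self, x: torch.Tensor):
        att = self.scorer(x)        # (batch, num_features)
        return x * att, att         # re-weighted features + scores


def loss_fn(task_loss: torch.Tensor, att: torch.Tensor,
            lam: float = 1e-3) -> torch.Tensor:
    # One plausible "interpretability-specific" term: an L1 penalty that
    # pushes attention toward a sparse, clinically readable subset of
    # features. The paper's exact losses differ.
    return task_loss + lam * att.abs().mean()
```

In such a design, the attention scores themselves serve as the explanation: because they gate the features during training rather than being computed after the fact, the same signal that interprets the input can also improve the downstream prediction, which is the in-model (rather than post hoc) behavior the abstract describes.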