The high complexity of deep learning models makes it difficult to explain what evidence they recognize as correlating with specific disease labels. This information is critical for building trust in models and for finding their biases. Until now, automated deep learning visualization solutions have identified regions of images used by classifiers, but these solutions are too coarse, too noisy, or have a limited representation of the ways images can change. We propose a novel method for formulating and presenting spatial explanations of disease evidence, called deformation field interpretation with generative adversarial networks (DeFI-GAN). An adversarially trained generator produces deformation fields that modify images of diseased patients to resemble images of healthy patients. We validate the method by studying chronic obstructive pulmonary disease (COPD) evidence in chest x-rays (CXRs) and Alzheimer's disease (AD) evidence in brain MRIs. When extracting disease evidence from longitudinal data, we show compelling results against a baseline that produces difference maps. DeFI-GAN also highlights disease biomarkers not found by previous methods, as well as potential biases that may help in investigations of the dataset and of the adopted learning methods.
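To make the core mechanic concrete, the sketch below shows, in PyTorch, how a predicted deformation field can warp a diseased image toward a healthy-looking counterpart, with the field itself serving as the spatial explanation. This is a minimal illustration under assumptions, not the authors' implementation: the `generator` network and its output convention (a dense displacement field in normalized coordinates) are hypothetical.

```python
import torch
import torch.nn.functional as F

def warp_with_deformation_field(image, displacement):
    """Warp `image` (N, C, H, W) by a displacement field (N, 2, H, W).

    Displacements are assumed to be in normalized [-1, 1] grid coordinates,
    as expected by torch.nn.functional.grid_sample.
    """
    n, _, h, w = image.shape
    # Identity sampling grid in normalized coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Add the predicted displacement (channels-last, as grid_sample requires).
    grid = identity + displacement.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

# Hypothetical usage with an adversarially trained generator:
# displacement = generator(diseased_image)                     # (N, 2, H, W)
# healthy_like = warp_with_deformation_field(diseased_image, displacement)
# explanation  = displacement  # visualized as the disease-evidence map
```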