Deep Neural Networks (DNNs) have enormous potential to learn from complex biomedical data. In particular, DNNs have been used to seamlessly fuse heterogeneous information from neuroanatomy, genetics, biomarkers, and neuropsychological tests for highly accurate Alzheimer's disease diagnosis. However, their black-box nature remains a barrier to the adoption of such systems in the clinic, where interpretability is essential. We propose Shapley Value Explanation of Heterogeneous Neural Networks (SVEHNN) for explaining the Alzheimer's diagnosis made by a DNN from a 3D point cloud of the neuroanatomy and tabular biomarkers. Our explanations are based on the Shapley value, which is the unique method satisfying all fundamental axioms for local explanations previously established in the literature. Thus, SVEHNN has many desirable characteristics that previous work on interpretability for medical decision making lacks. To avoid the exponential time complexity of the Shapley value, we propose to transform a given DNN into a Lightweight Probabilistic Deep Network without re-training, thus achieving a complexity that is only quadratic in the number of features. In our experiments on synthetic and real data, we show that we can closely approximate the exact Shapley value at a dramatically reduced runtime and can reveal the hidden knowledge the network has learned from the data.
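To make the exponential cost concrete, the sketch below computes exact Shapley values by enumerating all feature coalitions, which is what SVEHNN's quadratic approximation avoids. The value function `v` is a hypothetical stand-in for the network's output restricted to a feature subset (e.g. via a masked forward pass); the feature names are illustrative only.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values by enumerating all 2^(n-1) coalitions per feature.

    This is the O(2^n) baseline; SVEHNN-style approximations exist
    precisely because this does not scale with the number of features.
    """
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight of a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f to coalition S
                phi[f] += weight * (value_fn(set(S) | {f}) - value_fn(set(S)))
    return phi

# Hypothetical additive value function for two illustrative features.
def v(S):
    return 2.0 * ('hippocampus' in S) + 1.0 * ('age' in S)

phi = shapley_values(v, ['hippocampus', 'age'])
print(phi)  # for an additive v, each feature recovers its own contribution
```

By the efficiency axiom, the attributions sum to `v(all features) - v(empty set)`, which is one of the fundamental axioms the abstract refers to.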