Deploying reliable deep learning techniques in interdisciplinary applications requires learned models to output accurate and (even more importantly) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations follow from accurate predictions/classifications. We make the opposite claim: explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction can be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network, dubbed NeuroExplainer, with applications to uncovering altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and the respective discriminative representations, so as to accurately recognize preterm infants from term-born infants at term-equivalent age. The hierarchical attention-decoding modules are learned under subject-level weak supervision coupled with targeted regularizers deduced from domain knowledge about brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations together with accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer yields quantitatively reliable explanations that are qualitatively consistent with representative neuroimaging studies.
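To make the "explanation-first" idea concrete, the sketch below shows, in PyTorch-style pseudocode, how an attention map can be learned end-to-end with the classifier under subject-level supervision, with a sparsity regularizer on the attention map. This is a minimal illustrative sketch under our own simplifying assumptions, not NeuroExplainer's hierarchical spherical attention-decoding architecture; the module names, attention-pooling scheme, and sparsity weight are hypothetical.

```python
# Conceptual sketch: attention-gated classification trained end-to-end,
# where the attention map doubles as the explanation. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGatedClassifier(nn.Module):
    def __init__(self, in_channels: int, hidden: int, num_classes: int):
        super().__init__()
        # Per-vertex encoder over cortical attributes (e.g., thickness, curvature).
        self.encoder = nn.Sequential(
            nn.Linear(in_channels, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One attention score per vertex; sigmoid keeps it in [0, 1].
        self.attention = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, num_vertices, in_channels) cortical attributes per vertex.
        h = self.encoder(x)                              # (B, V, hidden)
        att = torch.sigmoid(self.attention(h))           # (B, V, 1) explanation map
        # Attention-weighted pooling of vertex features into a subject-level vector.
        pooled = (att * h).sum(dim=1) / (att.sum(dim=1) + 1e-6)
        return self.classifier(pooled), att.squeeze(-1)

def loss_fn(logits, labels, att, sparsity_weight=0.01):
    # Subject-level weak supervision (class label only) plus an L1 sparsity
    # penalty encouraging compact, fine-grained attention maps.
    ce = F.cross_entropy(logits, labels)
    sparsity = att.abs().mean()
    return ce + sparsity_weight * sparsity
```

In this simplified setting, the classification loss alone provides only subject-level (weak) supervision of the attention map, while the sparsity term plays the role of a prior-guided constraint; NeuroExplainer additionally employs hierarchical decoding and further domain-knowledge regularizers targeting fidelity and stability.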