Interpretable brain network models for disease prediction are of great value for the advancement of neuroscience. GNNs are promising for modeling complicated network data, but they are prone to overfitting and suffer from poor interpretability, which prevents their use in decision-critical scenarios like healthcare. To bridge this gap, we propose BrainNNExplainer, an interpretable GNN framework for brain network analysis. It is mainly composed of two jointly learned modules: a backbone prediction model specifically designed for brain networks and an explanation generator that highlights disease-specific prominent brain network connections. Extensive experimental results with visualizations on two challenging disease prediction datasets demonstrate the unique interpretability and outstanding performance of BrainNNExplainer.