Multivariate dynamical processes can often be intuitively described by a weighted connectivity graph between components representing each individual time series. Even a simple representation of this graph as a Pearson correlation matrix can be informative and predictive, as demonstrated in the brain imaging literature. However, there is a consensus expectation that powerful graph neural networks (GNNs) should perform better in similar settings. In this work, we present a model that is considerably shallower than deep GNNs, yet outperforms them in predictive accuracy in a brain imaging application. Our model learns the autoregressive structure of individual time series and estimates directed connectivity graphs between the learned representations via a self-attention mechanism in an end-to-end fashion. Supervised training of the model as a classifier between patients and controls yields a model that generates directed connectivity graphs and highlights the components of the time series that are predictive for each subject. We demonstrate our results on a functional neuroimaging dataset, classifying schizophrenia patients and controls.
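The pipeline described above, an autoregressive summary of each component followed by self-attention that yields a directed (asymmetric) connectivity graph, can be sketched roughly as follows. This is a minimal illustration under assumed choices, not the paper's actual architecture: the dimensions, the linear AR(p) least-squares embedding, and the single-head scaled dot-product attention are all stand-ins for the learned, end-to-end-trained components described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 10 components, 120 time points, AR order 3,
# 16-dim query/key projections.
n_comp, T, lag, d = 10, 120, 3, 16

X = rng.standard_normal((n_comp, T))  # toy multivariate time series

def ar_embedding(x, lag):
    """Summarize one component by the weights of a linear AR(lag) fit.
    (A stand-in for the learned autoregressive representation.)"""
    Z = np.stack([x[i:len(x) - lag + i] for i in range(lag)], axis=1)
    y = x[lag:]
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return w  # shape (lag,)

H = np.stack([ar_embedding(X[i], lag) for i in range(n_comp)])  # (n_comp, lag)

# Scaled dot-product self-attention over the component embeddings.
# The attention matrix A is row-stochastic and generally asymmetric,
# so it can be read as a directed connectivity graph.
Wq = rng.standard_normal((lag, d)) / np.sqrt(lag)
Wk = rng.standard_normal((lag, d)) / np.sqrt(lag)
scores = (H @ Wq) @ (H @ Wk).T / np.sqrt(d)
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)  # each row sums to 1

# Toy readout: attention-weighted features pooled into a single logit,
# standing in for the patient-vs-control classifier head.
Wv = rng.standard_normal((lag, 1))
logit = float((A @ H @ Wv).mean())
```

In an end-to-end model the AR embedding and the projections would be trained jointly with the classification loss, so the attention rows double as per-subject saliency over components.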