Owing to its complexity, graph-learning-based multi-modal integration and classification is one of the most challenging tasks in disease prediction. To effectively mitigate the negative interactions between modalities during multi-modal integration and to extract heterogeneous information from graphs, we propose a novel method called MMKGL (Multi-modal Multi-Kernel Graph Learning). To address the negative interactions between modalities, we propose a multi-modal graph embedding module that constructs a multi-modal graph. Unlike conventional methods that manually construct a single static graph for all modalities, each modality generates a separate graph through adaptive learning, and a function graph and a supervision graph are introduced for optimization during multi-graph fusion embedding. We then propose a multi-kernel graph learning module to extract heterogeneous information from the multi-modal graph. Information at different levels of the multi-modal graph is aggregated by convolutional kernels with different receptive field sizes, after which a cross-kernel discovery tensor is generated for disease prediction. Our method is evaluated on the benchmark Autism Brain Imaging Data Exchange (ABIDE) dataset and outperforms state-of-the-art methods. In addition, our model identifies discriminative brain regions associated with autism, providing guidance for the study of autism pathology.
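To make the multi-kernel aggregation step concrete, the following is a minimal sketch (not the authors' implementation): it approximates "kernels with different receptive field sizes" by propagating node features through successive powers of a symmetrically normalized adjacency matrix, so the k-th kernel aggregates over k-hop neighborhoods, and stacks the per-kernel outputs into a tensor loosely analogous to the cross-kernel discovery tensor. All function names and the NumPy formulation are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in standard GCNs
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def multi_kernel_aggregate(A, X, num_kernels=3):
    # Each "kernel" uses the k-th power of the normalized adjacency,
    # i.e. a receptive field of k hops around every node.
    A_norm = normalize_adj(A)
    P = np.eye(A.shape[0])
    outputs = []
    for _ in range(num_kernels):
        P = P @ A_norm          # widen the receptive field by one hop
        outputs.append(P @ X)   # aggregate node features over that neighborhood
    # Stack per-kernel results into a (num_kernels, nodes, features) tensor
    return np.stack(outputs, axis=0)

# Toy example: a 3-node path graph with identity features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)
T = multi_kernel_aggregate(A, X, num_kernels=2)
print(T.shape)  # (2, 3, 3): one slice per receptive-field size
```

In the actual MMKGL model the per-kernel outputs would be learned with trainable weights and fused for prediction; this sketch only shows how differing receptive fields yield complementary views of the same graph.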