The automatic early diagnosis of prodromal stages of Alzheimer's disease is of great relevance for patient treatment to improve quality of life. We address this problem as a multi-modal classification task. Multi-modal data provides richer and complementary information. However, existing techniques consider only either lower-order relations between the data or single/multi-modal imaging data. In this work, we introduce a novel semi-supervised hypergraph learning framework for Alzheimer's disease diagnosis. Our framework allows for higher-order relations among multi-modal imaging and non-imaging data whilst requiring a tiny labelled set. Firstly, we introduce a dual embedding strategy for constructing a robust hypergraph that preserves the data semantics. We achieve this by enforcing perturbation invariance at the image and graph levels using a contrastive-based mechanism. Secondly, we present a dynamically adjusted hypergraph diffusion model, via a semi-explicit flow, to improve the predictive uncertainty. Our experiments demonstrate that our framework outperforms current techniques for Alzheimer's disease diagnosis.
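The abstract describes the dual embedding step only at a high level, so as a rough illustration the sketch below shows a generic contrastive (NT-Xent-style) objective that pulls together the embeddings of two perturbed views of the same sample and pushes apart all other pairs. The function name, the temperature `tau`, and the pairing scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_invariance_loss(z1: torch.Tensor, z2: torch.Tensor,
                                tau: float = 0.5) -> torch.Tensor:
    """NT-Xent-style loss: rows of z1 and z2 are embeddings of two perturbed
    views of the same samples; matching rows are treated as positives and all
    other rows as negatives. Illustrative sketch only; `tau` is an assumption."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2N, d)
    sim = z @ z.t() / tau                              # cosine similarities (unit norm)
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))         # exclude self-similarity
    # the positive for row i is its other view: i + n (first half) or i - n (second half)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In the setting sketched by the abstract, one copy of such a loss could act on image-level perturbations and another on graph-level (hypergraph) perturbations; this pairing is our reading of the abstract, not a statement of the actual implementation.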
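The dynamically adjusted, semi-explicit diffusion scheme is likewise not spelled out in the abstract. As a point of reference only, the sketch below implements one explicit step of standard hypergraph diffusion with the normalized hypergraph Laplacian (Zhou et al.-style); the names `H`, `w`, and `alpha` and the fixed step size are assumptions, and the paper's semi-explicit, dynamically adjusted flow differs from this baseline.

```python
import torch

def hypergraph_diffusion_step(X: torch.Tensor, H: torch.Tensor,
                              w: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """One explicit diffusion step X <- X - alpha * L X with the normalized
    hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}.
    X: (n, d) node features, H: (n, m) incidence matrix, w: (m,) hyperedge weights.
    A standard baseline step, not the paper's semi-explicit flow."""
    Dv = (H * w).sum(dim=1)                      # node degrees d(v) = sum_e w(e) h(v, e)
    De = H.sum(dim=0)                            # hyperedge degrees delta(e)
    Dv_inv_sqrt = Dv.clamp(min=1e-12).rsqrt()
    De_inv = 1.0 / De.clamp(min=1e-12)
    Hn = Dv_inv_sqrt.unsqueeze(1) * H            # Dv^{-1/2} H
    Theta = (Hn * (w * De_inv)) @ Hn.t()         # Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
    L = torch.eye(X.size(0), device=X.device) - Theta
    return X - alpha * (L @ X)
```

Iterating such a step propagates label information from the tiny labelled set across hyperedges that connect imaging and non-imaging modalities; the dynamic adjustment and uncertainty handling described in the abstract would replace the fixed `alpha` and Laplacian used here.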