In a complex disease such as tuberculosis (TB), the evidence for the disease and its evolution may be present in multiple modalities such as clinical, genomic, or imaging data. Effective patient-tailored outcome prediction and therapeutic guidance will require fusing evidence from these modalities. Such multimodal fusion is difficult because the evidence for the disease may not be uniform across modalities, not all modality features may be relevant, and not all modalities may be present for every patient. These nuances make simple early, late, or intermediate fusion of features inadequate for outcome prediction. In this paper, we present a novel fusion framework using multiplexed graphs and derive a new graph neural network for learning from such graphs. Specifically, the framework allows modalities to be represented through their targeted encodings, and models their relationships explicitly via multiplexed graphs derived from salient features in a combined latent space. We present results showing that our proposed method outperforms state-of-the-art modality-fusion methods for multi-outcome prediction on a large TB dataset.
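To make the multiplexed-graph idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: each modality contributes its own edge layer over a shared set of patient nodes, a message-passing step aggregates neighbour features within each layer, and the per-layer messages are then fused per node. The function name, the mean aggregation, and the averaging fusion are all assumptions for illustration.

```python
# Illustrative sketch of message passing on a multiplexed graph:
# one edge set per modality ("layer") over shared nodes, intra-layer
# aggregation first, then cross-layer fusion per node.
from collections import defaultdict

def multiplex_message_pass(features, layers):
    """features: {node: float}; layers: {layer_name: [(u, v), ...]} (undirected)."""
    per_layer = {}
    for name, edges in layers.items():
        neigh = defaultdict(list)
        for u, v in edges:
            neigh[u].append(v)
            neigh[v].append(u)
        # mean-aggregate neighbour features within this modality layer
        per_layer[name] = {
            n: sum(features[m] for m in neigh[n]) / len(neigh[n])
            for n in neigh
        }
    # fuse across layers: average the layer-wise messages each node received;
    # nodes absent from every layer keep their own feature (handles missing modalities)
    fused = {}
    for n in features:
        msgs = [agg[n] for agg in per_layer.values() if n in agg]
        fused[n] = sum(msgs) / len(msgs) if msgs else features[n]
    return fused

# toy example: two modality layers over three patient nodes
feats = {"a": 1.0, "b": 2.0, "c": 3.0}
layers = {"clinical": [("a", "b")], "imaging": [("b", "c")]}
out = multiplex_message_pass(feats, layers)
```

A learned variant would replace the fixed mean aggregation with trainable per-layer weights, which is where a graph neural network over the multiplexed structure comes in.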