Learning with multiple modalities is crucial for automated brain tumor segmentation from magnetic resonance imaging data. Explicitly optimizing the common information shared among all modalities (e.g., by maximizing the total correlation) has been shown to yield better feature representations and thus enhance segmentation performance. However, existing approaches are oblivious to the partial common information shared by subsets of the modalities. In this paper, we show that identifying such partial common information can significantly boost the discriminative power of image segmentation models. In particular, we introduce the novel concept of a partial common information mask (PCI-mask) to provide a fine-grained characterization of what partial common information is shared by which subsets of the modalities. By solving a masked correlation maximization problem and simultaneously learning an optimal PCI-mask, we identify the latent microstructure of partial common information and leverage it in a self-attention module to selectively weight different feature representations in multi-modal data. We implement our proposed framework on the standard U-Net. Our experimental results on the Multi-modal Brain Tumor Segmentation Challenge (BraTS) datasets consistently outperform those of state-of-the-art segmentation baselines, with validation Dice similarity coefficients of 0.920, 0.897, and 0.837 for the whole tumor, tumor core, and enhancing tumor, respectively, on BraTS-2020.
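To make the masked correlation maximization concrete, the following is a minimal sketch of how a learnable PCI-mask could weight pairwise correlations among modality features. All names, shapes, and the pairwise (rather than higher-order) form of the mask are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedCorrelationLoss(nn.Module):
    """Hypothetical sketch of a masked correlation objective.

    Given per-modality feature vectors, it computes pairwise feature
    correlations and weights them with a learnable PCI-mask so that
    partial common information shared by modality subsets can be
    emphasized during training.
    """

    def __init__(self, num_modalities: int = 4):
        super().__init__()
        # One learnable logit per modality pair; a softmax over all
        # entries keeps the PCI-mask non-negative and normalized.
        self.pci_logits = nn.Parameter(
            torch.zeros(num_modalities, num_modalities))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (M, B, C) -- M modalities, batch size B, C-dim features.
        M, B, _ = feats.shape
        # Center and L2-normalize features so dot products act as
        # correlations (cosine similarity of centered features).
        z = F.normalize(feats - feats.mean(dim=1, keepdim=True), dim=-1)
        # Pairwise correlation between modalities, averaged over the batch.
        corr = torch.einsum('mbc,nbc->mn', z, z) / B
        mask = torch.softmax(self.pci_logits.flatten(), dim=0).view(M, M)
        # Negative masked correlation: minimizing this loss maximizes
        # the PCI-mask-weighted common information across modalities.
        return -(mask * corr).sum()
```

In a training loop, this loss would be added to the usual segmentation loss (e.g., Dice plus cross-entropy), with the PCI-mask parameters updated jointly with the network weights so that the mask converges toward the subsets of modalities that actually share information.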