Liver cancer is one of the most common cancers worldwide. Because texture changes of liver tumors are often inconspicuous, contrast-enhanced computed tomography (CT) imaging is effective for the diagnosis of liver cancer. In this paper, we focus on improving automated liver tumor segmentation by integrating multi-modal CT images. To this end, we propose a novel mutual learning (ML) strategy for effective and robust multi-modal liver tumor segmentation. Different from existing multi-modal methods that fuse information from different modalities with a single model, with ML an ensemble of modality-specific models learns collaboratively, and the models teach each other to distill both the characteristics of and the commonality between the high-level representations of different modalities. The proposed ML not only achieves superior performance in multi-modal learning but can also handle missing modalities by transferring knowledge from available modalities to missing ones. Additionally, we present a modality-aware (MA) module, in which the modality-specific models are interconnected and calibrated with attention weights for adaptive information exchange. The proposed modality-aware mutual learning (MAML) method achieves promising results for liver tumor segmentation on a large-scale clinical dataset. Moreover, we show the efficacy and robustness of MAML in handling missing modalities on both the liver tumor and public brain tumor (BRATS 2018) datasets. Our code is available at https://github.com/YaoZhang93/MAML.
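The core of the mutual learning strategy described above is that each modality-specific model is trained with its own segmentation loss plus a mimicry term that distills the soft predictions of the other model. The following is a minimal NumPy sketch of that objective; the function names, the cross-entropy choice, and the `lam` weighting are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-8):
    # KL(p || q), averaged over voxels; eps guards against log(0).
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def mutual_learning_loss(logits_a, logits_b, labels_onehot, lam=0.5):
    # Each modality-specific model gets a supervised segmentation loss
    # (cross-entropy here) plus a mimicry term that distills the other
    # model's soft predictions -- the "teach each other" idea.
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    ce_a = -np.mean(np.sum(labels_onehot * np.log(p_a + 1e-8), axis=-1))
    ce_b = -np.mean(np.sum(labels_onehot * np.log(p_b + 1e-8), axis=-1))
    loss_a = ce_a + lam * kl_div(p_b, p_a)  # model A mimics model B
    loss_b = ce_b + lam * kl_div(p_a, p_b)  # model B mimics model A
    return loss_a, loss_b
```

Because the mimicry terms depend only on the other model's predictions, not on ground truth, the same mechanism supports the missing-modality setting: a model for an absent modality can still be supervised by the soft outputs of the available ones.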