Recent advances in machine learning and the prevalence of digital medical images have opened up an opportunity to address the challenging brain tumor segmentation (BTS) task using deep convolutional neural networks. However, unlike the widely available RGB image data, the medical image data used in brain tumor segmentation are relatively scarce in data scale but richer in modality information. To this end, this paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from multi-modality MRI data. The core idea is to mine rich patterns across the multi-modality data to compensate for the insufficient data scale. The proposed cross-modality deep feature learning framework consists of two learning processes: the cross-modality feature transition (CMFT) process and the cross-modality feature fusion (CMFF) process, which aim to learn rich feature representations by transferring knowledge across different modalities and by fusing knowledge from different modalities, respectively. Comprehensive experiments conducted on the BraTS benchmarks show that the proposed cross-modality deep feature learning framework effectively improves brain tumor segmentation performance compared with baseline methods and state-of-the-art methods.
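To make the cross-modality feature fusion (CMFF) idea concrete, the following is a minimal numpy sketch of one common fusion pattern: each MRI modality (e.g. T1, T1ce, T2, FLAIR) is encoded separately, and the per-modality feature maps are concatenated and mixed by a 1x1 projection into a joint representation. The encoder, shapes, and weights here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(volume, n_filters=8):
    """Stand-in for a per-modality CNN encoder: projects each voxel
    intensity to an n_filters-dimensional feature vector.
    (Hypothetical; the real framework uses learned convolutions.)"""
    h, w = volume.shape
    proj = rng.standard_normal((1, n_filters))
    return volume.reshape(h, w, 1) @ proj          # (H, W, C)

def fuse(features, n_out=8):
    """CMFF-style fusion sketch: concatenate per-modality features
    along the channel axis, then mix with a 1x1-conv-like projection."""
    stacked = np.concatenate(features, axis=-1)       # (H, W, 4*C)
    mix = rng.standard_normal((stacked.shape[-1], n_out))
    return stacked @ mix                              # (H, W, n_out)

# Four synthetic 64x64 slices standing in for T1, T1ce, T2, FLAIR.
modalities = [rng.standard_normal((64, 64)) for _ in range(4)]
fused = fuse([encode(m) for m in modalities])
print(fused.shape)  # (64, 64, 8)
```

A segmentation head would then predict tumor labels from the fused map; the design choice illustrated here is simply that fusion happens in feature space rather than by stacking raw modalities at the input.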