Due to the difficulty of obtaining multimodal paired images in clinical practice, recent studies propose to train brain tumor segmentation models on unpaired images and to capture complementary information through modality translation. However, these models cannot fully exploit the complementary information across modalities. In this work, we present a novel two-step (intra-modality and inter-modality) curriculum disentanglement learning framework that effectively utilizes privileged semi-paired images, i.e., limited paired images available only during training, for brain tumor segmentation. Specifically, in the first step, we conduct reconstruction and segmentation with augmented intra-modality style-consistent images. In the second step, the model jointly performs reconstruction, unsupervised/supervised translation, and segmentation for both unpaired and paired inter-modality images. In this step, a content consistency loss and a supervised translation loss are proposed to leverage the complementary information from different modalities. Through these two steps, our method extracts modality-specific style codes, which describe tissue-feature attenuation and image contrast, and modality-invariant content codes, which carry the anatomical and functional information of the input images. Experiments on three brain tumor segmentation tasks show that our model outperforms competing segmentation models trained on unpaired images.
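To make the second-step objectives concrete, the following is a minimal PyTorch sketch of the two paired-image losses named above. The class names (ContentEncoder, StyleEncoder, Decoder), the tiny architectures, and the L1 form of both losses are illustrative assumptions for exposition, not the paper's actual networks or loss weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentEncoder(nn.Module):
    """Extracts a modality-invariant content map (anatomy/function)."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Extracts a modality-specific style vector (contrast/attenuation)."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.net(x).flatten(1)  # (B, dim)

class Decoder(nn.Module):
    """Reconstructs/translates an image from a content map plus a style vector."""
    def __init__(self, ch=16, style_dim=8):
        super().__init__()
        self.fc = nn.Linear(style_dim, ch)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)
    def forward(self, c, s):
        s = self.fc(s)[..., None, None]  # broadcast the style over space
        return self.out(c + s)

E_c, E_s, G = ContentEncoder(), StyleEncoder(), Decoder()

def paired_step_losses(x_a, x_b):
    """Step-2 losses on a paired sample (x_a, x_b) from modalities A and B."""
    c_a, c_b = E_c(x_a), E_c(x_b)
    s_b = E_s(x_b)
    # Content consistency: paired images share anatomy, so their
    # modality-invariant content codes should agree.
    loss_content = F.l1_loss(c_a, c_b)
    # Supervised translation: A-content combined with B-style should
    # reproduce the real B image when the pairing is known.
    loss_trans = F.l1_loss(G(c_a, s_b), x_b)
    return loss_content, loss_trans

x_a, x_b = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
print([l.item() for l in paired_step_losses(x_a, x_b)])
```

For unpaired inter-modality images no such ground-truth target exists, which is why the framework falls back to unsupervised translation and reconstruction objectives in that case.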