Automated medical image segmentation using deep neural networks typically requires substantial supervised training. However, these models fail to generalize well across different imaging modalities. This shortcoming, amplified by the limited availability of annotated data, has been hampering the deployment of such methods at a larger scale across modalities. To address these issues, we propose M-GenSeg, a new semi-supervised training strategy for accurate cross-modality tumor segmentation on unpaired bi-modal datasets. Based on image-level labels, a first unsupervised objective encourages the model to perform diseased-to-healthy translation by disentangling tumors from the background, which encompasses the segmentation task. Then, teaching the model to translate between image modalities enables the synthesis of target images from a source modality, thus leveraging the pixel-level annotations from the source modality to enforce generalization to the target modality images. We evaluated the performance on a brain tumor segmentation dataset composed of four different contrast sequences from the public BraTS 2020 challenge dataset. We report consistent improvement in Dice scores on both the source and the unannotated target modalities. In all twelve distinct domain adaptation experiments, the proposed model shows a clear improvement over state-of-the-art domain-adaptive baselines, with absolute Dice gains on the target modality reaching 0.15.
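To make the two-part training objective concrete, the sketch below outlines how the described losses could be combined: an unsupervised diseased-to-healthy translation term driven by image-level labels, a cycle-consistent cross-modality translation term, and supervised segmentation on source images and on synthesized target images. This is a minimal illustrative sketch, not the authors' implementation; all module names, network sizes, and loss weights are assumptions.

```python
# Hypothetical sketch of a combined M-GenSeg-style training step.
# Component names and weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Stand-in encoder-decoder; the actual networks would be larger."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Assumed components:
# - gen_healthy: diseased-to-healthy translation; the residual it removes
#   highlights the tumor region (unsupervised objective).
# - gen_target / gen_source: unpaired modality translators (CycleGAN-style).
# - seg: segmentation head trained on source pixel labels and reused on
#   synthesized target-modality images.
gen_healthy, gen_target, gen_source, seg = TinyNet(), TinyNet(), TinyNet(), TinyNet()

def training_step(x_src_diseased, y_src_mask, x_tgt_diseased):
    # 1) Diseased-to-healthy translation on the unannotated target modality:
    #    the generator explains away the tumor; the residual acts as a proxy mask.
    healthy = gen_healthy(x_tgt_diseased)
    loss_translation = torch.abs(x_tgt_diseased - healthy).mean()  # placeholder term

    # 2) Cross-modality translation: synthesize target-modality images from the
    #    annotated source modality, with a cycle-consistency constraint.
    fake_tgt = gen_target(x_src_diseased)
    loss_cycle = F.l1_loss(gen_source(fake_tgt), x_src_diseased)

    # 3) Supervised segmentation on source images and on synthesized target images,
    #    reusing the source pixel-level annotations.
    loss_seg_src = F.binary_cross_entropy_with_logits(seg(x_src_diseased), y_src_mask)
    loss_seg_fake = F.binary_cross_entropy_with_logits(seg(fake_tgt), y_src_mask)

    # Equal weights here purely for illustration.
    return loss_translation + loss_cycle + loss_seg_src + loss_seg_fake

# Toy usage with random 2D slices.
x_src = torch.rand(2, 1, 64, 64)
y_src = (torch.rand(2, 1, 64, 64) > 0.9).float()
x_tgt = torch.rand(2, 1, 64, 64)
print(training_step(x_src, y_src, x_tgt).item())
```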