We propose a novel teacher-student model for semi-supervised multi-organ segmentation. In teacher-student models, data augmentation is usually applied to unlabeled data to enforce consistency training between the teacher and the student. We start from a key observation: the fixed relative locations and variable sizes of different organs provide the distribution information from which a multi-organ CT scan is drawn. We therefore treat this anatomical prior as a strong tool to guide data augmentation and to reduce the mismatch between labeled and unlabeled images in semi-supervised learning. More specifically, we propose a data augmentation strategy based on partition-and-recovery of $N^3$ cubes, applied both across and within labeled and unlabeled images. Our strategy encourages unlabeled images to learn organ semantics in their relative locations from labeled images (cross-branch) and enhances the learning ability for small organs (within-branch). For the within-branch, we further refine the quality of pseudo labels by blending the representations learned from small cubes to incorporate local attributes. Our method is termed MagicNet, since it treats the CT volume as a magic cube and the $N^3$-cube partition-and-recovery process matches the rules of playing a magic cube. Extensive experiments on two public CT multi-organ datasets demonstrate the effectiveness of MagicNet, which noticeably outperforms state-of-the-art semi-supervised medical image segmentation approaches, with a +7% DSC improvement on the MACT dataset with 10% labeled images.
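The core data operation above can be sketched concretely: partition a cubic CT volume into $N^3$ equal sub-cubes, optionally swap location-matched cubes between a labeled and an unlabeled volume (the cross-branch mixing), and recover full volumes from the cubes. The sketch below is a minimal, hedged illustration of this idea only; the function names, the choice to swap half of the cubes, and the restriction to cubic volumes are assumptions for clarity, not the paper's exact implementation.

```python
import numpy as np

def partition_cubes(vol, n):
    """Split a cubic volume of shape (D, D, D) into n^3 equal sub-cubes,
    ordered by (i, j, k) grid position."""
    d = vol.shape[0] // n
    return [vol[i*d:(i+1)*d, j*d:(j+1)*d, k*d:(k+1)*d].copy()
            for i in range(n) for j in range(n) for k in range(n)]

def recover_volume(cubes, n):
    """Inverse of partition_cubes: reassemble n^3 sub-cubes into one volume."""
    d = cubes[0].shape[0]
    vol = np.empty((n * d,) * 3, dtype=cubes[0].dtype)
    for idx, cube in enumerate(cubes):
        i, j, k = idx // (n * n), (idx // n) % n, idx % n
        vol[i*d:(i+1)*d, j*d:(j+1)*d, k*d:(k+1)*d] = cube
    return vol

def cross_mix(labeled, unlabeled, n, rng):
    """Swap a random subset of location-matched cubes between a labeled and an
    unlabeled volume, so unlabeled content appears in labeled anatomical
    context and vice versa (illustrative cross-branch mixing)."""
    lc, uc = partition_cubes(labeled, n), partition_cubes(unlabeled, n)
    # Swapping half the cubes is an arbitrary illustrative ratio.
    for idx in rng.choice(n**3, size=n**3 // 2, replace=False):
        lc[idx], uc[idx] = uc[idx], lc[idx]
    return recover_volume(lc, n), recover_volume(uc, n)
```

Because the cubes keep their grid positions during the swap, the relative locations of organs are preserved in both mixed volumes, which is exactly the anatomical prior the augmentation is meant to exploit.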