Automatic methods to segment the vestibular schwannoma (VS) tumor and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning. Although supervised methods have achieved satisfactory performance in VS segmentation, they require full expert annotations, which are laborious and time-consuming to obtain. In this work, we tackle VS and cochlea segmentation in an unsupervised domain adaptation setting. Our proposed method leverages image-level domain alignment to minimize the domain divergence and semi-supervised training to further boost performance. Furthermore, we propose to fuse the labels predicted by multiple models via noisy label correction. On the final evaluation leaderboard of the MICCAI 2021 crossMoDA challenge, our method achieved promising segmentation performance, with mean Dice scores of 79.9% and 82.5% and ASSD of 1.29 mm and 0.18 mm for the VS tumor and the cochlea, respectively. The cochlea ASSD achieved by our method outperformed all other competing methods as well as the supervised nnU-Net.
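To make the multi-model label fusion step concrete, below is a minimal sketch of per-voxel majority voting over the label maps predicted by several models. This is an illustrative baseline only: the abstract does not specify the authors' noisy label correction scheme, so the function name, the three-class label convention (0 = background, 1 = VS, 2 = cochlea), and the plain-voting rule are all assumptions.

```python
import numpy as np

def fuse_labels_majority_vote(predictions, num_classes=3):
    """Fuse segmentation label maps from multiple models by per-voxel
    majority vote. `predictions` is a list of integer label volumes of
    identical shape; classes assumed: 0 = background, 1 = VS, 2 = cochlea.
    NOTE: a simplified stand-in for the paper's noisy label correction."""
    stacked = np.stack(predictions, axis=0)  # shape: (num_models, D, H, W)
    # Count, for each class, how many models voted for it at each voxel.
    votes = np.zeros((num_classes,) + stacked.shape[1:], dtype=np.int32)
    for c in range(num_classes):
        votes[c] = (stacked == c).sum(axis=0)
    # Assign each voxel the class with the most votes.
    return votes.argmax(axis=0).astype(np.uint8)

# Hypothetical usage with three model outputs of identical shape:
# fused = fuse_labels_majority_vote([pred_a, pred_b, pred_c])
```

In practice, a noisy-label-aware fusion would additionally reweight or relabel voxels where the models disagree, rather than trusting the raw vote counts.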