Automatic methods to segment vestibular schwannoma (VS) tumors and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning. Although supervised methods have achieved satisfactory performance in VS segmentation, they require full annotations by experts, which is laborious and time-consuming. In this work, we tackle the VS and cochlea segmentation problem in an unsupervised domain adaptation setting. Our proposed method leverages image-level domain alignment to minimize the domain divergence, and semi-supervised training to further boost performance. Furthermore, we propose to fuse the labels predicted by multiple models via noisy-label correction. On the challenge validation leaderboard, our unsupervised method achieved promising VS and cochlea segmentation performance, with a mean Dice score of 0.8261 $\pm$ 0.0416; the mean Dice score for the tumor alone is 0.8302 $\pm$ 0.0772. This is comparable to weakly-supervised methods.
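For reference, the Dice score reported above measures the overlap between a predicted and a ground-truth segmentation mask. A minimal sketch of the standard computation (not the challenge's official evaluation code; `dice_score` and its epsilon smoothing are illustrative assumptions):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|).
    The small eps avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy 2D example: perfect overlap yields a score of (nearly) 1.0,
# disjoint masks yield 0.0.
mask = np.array([[1, 1], [0, 1]])
print(dice_score(mask, mask))                 # close to 1.0
print(dice_score(mask, 1 - mask))             # 0.0
```

In practice, the per-structure scores (tumor, cochlea) are computed on each case's 3D mask and then averaged across the validation set, which is how the mean $\pm$ standard-deviation figures above are obtained.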