The purpose of this study is to apply and evaluate out-of-the-box deep learning frameworks for the crossMoDA challenge. We use CUT, a model for unpaired image-to-image translation based on patchwise contrastive learning and adversarial learning, for domain adaptation from contrast-enhanced T1 MR to high-resolution T2 MR. As data augmentation, we generate additional images in which the vestibular schwannomas have lower signal intensity. For the segmentation task, we use the nnU-Net framework. Our final submission achieved mean Dice scores of 0.8299 in the validation phase and 0.8253 in the test phase. Our method ranked 3rd in the crossMoDA challenge.
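The patchwise contrastive objective at the core of CUT can be summarized compactly: each patch feature of the translated image should match the feature of the corresponding patch in the input (the positive) and repel the other patches of the same image (the negatives). Below is a minimal PatchNCE-style sketch, assuming pre-extracted per-patch feature vectors; the function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_q: torch.Tensor, feat_k: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Patchwise InfoNCE loss in the spirit of CUT.

    feat_q: (N, D) features of N patches from the translated image.
    feat_k: (N, D) features of the corresponding patches from the input image.
    The i-th query's positive is the i-th key; all other keys are negatives.
    """
    feat_q = F.normalize(feat_q, dim=1)
    feat_k = F.normalize(feat_k, dim=1).detach()   # keys act as fixed targets
    logits = feat_q @ feat_k.t() / tau             # (N, N) similarity matrix
    targets = torch.arange(feat_q.size(0), device=feat_q.device)
    return F.cross_entropy(logits, targets)        # positives on the diagonal
```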
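The intensity-based augmentation can likewise be sketched. The abstract only states that additional images with lower tumor signal intensity are generated; the helper below, including its name and the scaling-factor range, is a hypothetical illustration of one way to do this given an image and its tumor segmentation mask.

```python
import numpy as np

def lower_tumor_intensity(image: np.ndarray, tumor_mask: np.ndarray,
                          low: float = 0.3, high: float = 0.7,
                          rng: np.random.Generator | None = None) -> np.ndarray:
    """Return a copy of `image` in which voxels inside `tumor_mask`
    are scaled by a random factor drawn from [low, high], yielding an
    extra training image with a darker vestibular schwannoma.
    The factor range is an assumption for illustration."""
    rng = rng or np.random.default_rng()
    factor = rng.uniform(low, high)
    augmented = image.astype(np.float32).copy()
    augmented[tumor_mask > 0] *= factor
    return augmented
```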