Multi-modality medical imaging is crucial in clinical treatment because it provides complementary information for medical image segmentation. However, collecting multi-modal data in clinical practice is difficult due to limits on scan time and other clinical constraints. It is therefore clinically meaningful to develop a segmentation paradigm that handles this missing-modality problem. In this paper, we propose a prototype knowledge distillation (ProtoKD) method to tackle this challenging problem, especially in the toughest scenario where only single-modal data are available. Specifically, ProtoKD not only distills the pixel-wise knowledge of multi-modal data to a single-modality model but also transfers intra-class and inter-class feature variations, so that the student model learns more robust feature representations from the teacher and can run inference with only a single modality. Our method achieves state-of-the-art performance on the BraTS benchmark.
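To make the idea concrete, below is a minimal PyTorch sketch of a prototype-based distillation loss in the spirit the abstract describes, not the authors' implementation. Under assumed shapes (feature maps `(B, C, H, W)`, one-hot class masks `(B, K, H, W)`), class prototypes are obtained by masked average pooling of teacher features; each pixel's similarity to every prototype forms a distribution that captures intra-/inter-class feature structure, and the student is trained to match the teacher's distributions. A standard pixel-wise logit KD term is included for completeness. All names (`proto_kd_loss`, `pixel_kd_loss`, temperature `tau`) are hypothetical.

```python
# Hedged sketch of prototype knowledge distillation (ProtoKD-style losses).
# Assumptions (not from the paper): feats (B, C, H, W), masks (B, K, H, W).
import torch
import torch.nn.functional as F

def class_prototypes(feats, masks, eps=1e-6):
    """Masked average pooling: one C-dim prototype per class -> (K, C)."""
    f = feats.flatten(2)                          # (B, C, HW)
    m = masks.flatten(2).float()                  # (B, K, HW)
    protos = torch.einsum('bkn,bcn->kc', m, f)    # per-class feature sums
    counts = m.sum(dim=(0, 2)).unsqueeze(1)       # (K, 1) pixels per class
    return protos / (counts + eps)

def proto_similarity(feats, protos, tau=1.0):
    """Cosine similarity of each pixel feature to each prototype,
    softmax-normalized into a per-pixel class distribution (B, K, H, W)."""
    f = F.normalize(feats, dim=1)
    p = F.normalize(protos, dim=1)
    sim = torch.einsum('bchw,kc->bkhw', f, p)
    return F.softmax(sim / tau, dim=1)

def proto_kd_loss(stu_feats, tea_feats, masks, tau=1.0):
    """KL between student and teacher pixel-to-prototype distributions;
    this transfers intra-class and inter-class feature variations."""
    with torch.no_grad():
        tea_protos = class_prototypes(tea_feats, masks)
        tea_dist = proto_similarity(tea_feats, tea_protos, tau)
    stu_dist = proto_similarity(stu_feats, tea_protos, tau)
    return F.kl_div(stu_dist.clamp_min(1e-8).log(), tea_dist,
                    reduction='batchmean')

def pixel_kd_loss(stu_logits, tea_logits, tau=2.0):
    """Standard pixel-wise distillation on temperature-softened logits."""
    return F.kl_div(F.log_softmax(stu_logits / tau, dim=1),
                    F.softmax(tea_logits / tau, dim=1),
                    reduction='batchmean') * tau * tau
```

In a missing-modality setup, the teacher would be trained on all MRI sequences while the student sees only one; the two losses above would then be added (with weights of one's choosing) to the student's ordinary segmentation loss.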