Multi-modal medical imaging is crucial in clinical treatment because different modalities provide complementary information for medical image segmentation. However, collecting multi-modal data in clinical practice is difficult due to constraints such as limited scan time. It is therefore clinically meaningful to develop a segmentation paradigm that can handle this missing-modality problem. In this paper, we propose a prototype knowledge distillation (ProtoKD) method to tackle this challenging problem, especially in the hardest scenario, where only a single modality is available. Specifically, ProtoKD not only distills the pixel-wise knowledge of multi-modal data into the single-modal student, but also transfers intra-class and inter-class feature variations, so that the student model learns a more robust feature representation from the teacher model and can run inference with only a single modality. Our method achieves state-of-the-art performance on the BraTS benchmark. The code is available at \url{https://github.com/SakurajimaMaiii/ProtoKD}.
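Since the abstract only sketches the method, the following is a minimal PyTorch sketch of the two distillation terms it describes: a standard pixel-wise logit distillation loss, and a prototype-based loss that transfers intra-class and inter-class feature structure by matching each pixel's similarity distribution over class prototypes between teacher and student. All function names, the masked-average-pooling prototype construction, the cosine-similarity maps, the KL matching, and the temperature values are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def pixelwise_kd_loss(student_logits, teacher_logits, T=4.0):
    # Pixel-wise distillation: soften both (B, C, H, W) logit maps
    # with temperature T and match them via KL divergence.
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def class_prototypes(features, labels, num_classes):
    # One prototype per class via masked average pooling.
    # features: (B, D, H, W); labels: (B, H, W) with integer class ids.
    B, D, H, W = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, D)   # (B*H*W, D)
    labs = labels.reshape(-1)                             # (B*H*W,)
    protos = []
    for c in range(num_classes):
        mask = (labs == c).float().unsqueeze(1)           # (B*H*W, 1)
        denom = mask.sum().clamp(min=1.0)                 # avoid /0 for absent classes
        protos.append((feats * mask).sum(0) / denom)      # (D,)
    return torch.stack(protos)                            # (num_classes, D)

def proto_similarity_loss(student_feats, teacher_feats, labels,
                          num_classes, T=1.0):
    # Transfer intra-/inter-class feature variations: each pixel's
    # cosine similarities to all class prototypes form a distribution,
    # and the student is trained to match the teacher's distribution.
    with torch.no_grad():
        protos_t = class_prototypes(teacher_feats, labels, num_classes)
    protos_s = class_prototypes(student_feats, labels, num_classes)

    def sim_map(feats, protos):
        f = F.normalize(feats, dim=1)                     # (B, D, H, W)
        p = F.normalize(protos, dim=1)                    # (K, D)
        return torch.einsum("bdhw,kd->bkhw", f, p)        # (B, K, H, W)

    s_student = sim_map(student_feats, protos_s)
    with torch.no_grad():
        s_teacher = sim_map(teacher_feats, protos_t)
    log_q = F.log_softmax(s_student / T, dim=1)
    p = F.softmax(s_teacher / T, dim=1)
    return F.kl_div(log_q, p, reduction="batchmean")
```

In training, the teacher would be run on the full multi-modal input and the student on a single modality of the same case; the total loss would combine the usual segmentation loss on the student with weighted sums of the two terms above.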