Compared with natural images, medical images are difficult to acquire and costly to label. Contrastive learning, as an unsupervised learning method, can make more effective use of unlabeled medical images. In this paper, we adopted a Transformer-based contrastive learning method and, as a novel step, trained the contrastive learning network with transfer learning. The resulting model was then transferred to the downstream parotid segmentation task, improving the performance of the parotid segmentation model on the test set. The improved DSC was 89.60%, MPA was 99.36%, MIoU was 85.11%, and HD was 2.98. All four metrics improved significantly compared with using a supervised learning model as the pre-trained model for the parotid segmentation network. In addition, we found that the contrastive learning model improved mainly the encoder part of the segmentation network, so this paper also attempted to build a contrastive learning network for the decoder part and discusses the problems encountered in the process.
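As a reference for the reported metrics, the following is a minimal sketch of how DSC (Dice similarity coefficient) and IoU can be computed for a pair of binary segmentation masks; the function names and test arrays are illustrative, not from the paper's codebase.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """DSC = 2 * |A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU = |A ∩ B| / |A ∪ B|; MIoU averages this over classes."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union

# Illustrative 2x2 masks: one overlapping foreground pixel.
pred = np.array([[1, 1], [0, 0]], dtype=bool)
target = np.array([[1, 0], [0, 0]], dtype=bool)
print(round(dice_coefficient(pred, target), 4))  # 2*1/(2+1) -> 0.6667
print(round(iou(pred, target), 4))               # 1/2 -> 0.5
```

In practice these are averaged over the test set (and over classes for MIoU); HD (Hausdorff distance) is computed on the mask boundaries instead.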