Contrastive learning has proven useful in many applications where access to labelled data is limited. The lack of annotated data is particularly problematic in medical image segmentation, as it is difficult for clinical experts to manually annotate large volumes of data such as cardiac structures in ultrasound images of the heart. In this paper, we propose a self-supervised contrastive learning method to segment the left ventricle from echocardiography when only limited annotated images exist. Furthermore, we study the effect of contrastive pretraining on two well-known segmentation networks, UNet and DeepLabV3. Our results show that contrastive pretraining helps improve performance on left ventricle segmentation, particularly when annotated data is scarce. We show how to achieve results comparable to state-of-the-art fully supervised algorithms by training our models in a self-supervised fashion and then fine-tuning on just 5\% of the data. Our solution outperforms previously published results on a large public dataset (EchoNet-Dynamic), achieving a Dice score of 0.9252. We also evaluate our solution on another, smaller dataset (CAMUS) to demonstrate its generalizability. The code is available at https://github.com/BioMedIA-MBZUAI/contrastive-echo.
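To make the pretrain-then-fine-tune pipeline concrete, the sketch below illustrates a generic SimCLR-style contrastive pretraining step on unlabelled frames, followed by reuse of the pretrained weights for supervised fine-tuning. This is a minimal illustration under assumed choices, not the paper's exact configuration: the toy encoder, `nt_xent` loss, and augmentations are all placeholders for the UNet/DeepLabV3 backbones and training recipe described in the paper.

```python
# Minimal sketch: SimCLR-style contrastive pretraining, then fine-tuning.
# SmallEncoder and nt_xent are illustrative stand-ins, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy CNN encoder standing in for the UNet/DeepLabV3 backbone."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Sequential(nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):
        return self.proj(self.features(x))

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau                     # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))     # exclude self-similarity
    # Row i's positive is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

encoder = SmallEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One pretraining step: two augmented views of the same unlabelled frames.
x = torch.randn(8, 1, 64, 64)                 # stand-in for echo frames
view1 = x + 0.05 * torch.randn_like(x)        # noise augmentation
view2 = x.flip(-1)                            # horizontal-flip augmentation
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()
opt.step()

# The pretrained encoder weights would then initialise the segmentation
# network, which is fine-tuned on the small labelled subset (e.g. 5%)
# with a standard segmentation loss such as Dice or cross-entropy.
```

The key design point this sketch captures is that pretraining needs no labels: the supervision signal comes entirely from matching two augmented views of the same image, so all unlabelled echocardiography frames can contribute before the scarce annotations are used.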