Contrastive learning has proven useful in many applications where access to labelled data is limited. The lack of annotated data is particularly problematic in medical image segmentation, as it is difficult to have clinical experts manually annotate large volumes of data such as cardiac structures in ultrasound images of the heart. In this paper, we investigate whether contrastive pretraining is helpful for the segmentation of the left ventricle in echocardiography images. Furthermore, we study the effect of contrastive pretraining on two well-known segmentation networks, UNet and DeepLabV3. Our results show that contrastive pretraining helps improve performance on left ventricle segmentation, particularly when annotated data is scarce. We show how to achieve results comparable to those of state-of-the-art fully supervised algorithms by training our models in a self-supervised fashion and then fine-tuning on just 5\% of the data. Our solution outperforms the currently published state of the art on a large public dataset (EchoNet-Dynamic), achieving a Dice score of 0.9211. We also evaluate our solution on a smaller dataset (CAMUS) to demonstrate its generalizability. The code is available at https://github.com/BioMedIA-MBZUAI/contrastive-echo.