Automated segmentation in medical image analysis is a challenging task that requires a large amount of manually labeled data. However, manually annotating medical data is often laborious, and most existing learning-based approaches fail to accurately delineate object boundaries without effective geometric constraints. Contrastive learning, a sub-area of self-supervised learning, has recently been noted as a promising direction in multiple application fields. In this work, we present a novel Contrastive Voxel-wise Representation Distillation (CVRD) method with geometric constraints to learn global-local visual representations for volumetric medical image segmentation with limited annotations. Our framework can effectively learn global and local features by capturing 3D spatial context and rich anatomical information. Specifically, we introduce a voxel-to-volume contrastive algorithm to learn global information from 3D images, and propose to perform local voxel-to-voxel distillation to explicitly exploit local cues in the embedding space. Moreover, we integrate an elastic interaction-based active contour model as a geometric regularization term to enable fast and reliable object delineation in an end-to-end learning manner. Results on the Atrial Segmentation Challenge dataset demonstrate the superiority of our proposed scheme, especially in settings with a very limited number of annotations. The code will be available at https://github.com/charlesyou999648/CVRD.
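To make the voxel-wise contrastive idea concrete, the following is a minimal sketch (not the authors' released code) of an InfoNCE-style loss over sampled voxel embeddings, assuming paired features from two branches (e.g., a teacher and a student network); the names `student_feats`, `teacher_feats`, and `temperature` are illustrative placeholders rather than parameters from the paper.

```python
# Minimal sketch of a voxel-wise contrastive loss (assumed setup, not the CVRD release).
import torch
import torch.nn.functional as F


def voxelwise_contrastive_loss(student_feats: torch.Tensor,
                               teacher_feats: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """Contrast each student voxel embedding against all teacher voxel embeddings.

    student_feats, teacher_feats: (N, C) tensors of N sampled voxel embeddings
    with C channels; row i of both tensors comes from the same spatial location,
    so the diagonal of the similarity matrix holds the positive pairs.
    """
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    logits = s @ t.t() / temperature                      # (N, N) cosine similarities
    targets = torch.arange(s.size(0), device=s.device)    # positives on the diagonal
    return F.cross_entropy(logits, targets)


# Example: 512 randomly sampled voxel embeddings with 32 channels each.
loss = voxelwise_contrastive_loss(torch.randn(512, 32), torch.randn(512, 32))
```

In practice such a loss is typically combined with a supervised segmentation loss and, as described above, a geometric regularization term; the exact sampling of voxels and the distillation schedule follow the released implementation rather than this sketch.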