Self-supervised learning (SSL) has recently achieved promising performance on 3D medical image segmentation tasks. Most current methods follow existing SSL paradigms originally designed for photographic or natural images, which cannot explicitly and thoroughly exploit the intrinsically similar anatomical structures shared across varying medical images. This may in fact degrade the quality of the learned deep representations by maximizing the similarity among features that contain spatial misalignment information and different anatomical semantics. In this work, we propose a new self-supervised learning framework, namely Alice, that explicitly fulfills Anatomical invariance modeling and semantic alignment by elaborately combining discriminative and generative objectives. Alice introduces a new contrastive learning strategy that encourages similarity between views that are diversely mined but carry consistent high-level semantics, in order to learn invariant anatomical features. Moreover, we design a conditional anatomical feature alignment module that complements corrupted embeddings with globally matched semantics and inter-patch topology information, conditioned on the distribution of local image content, which permits the creation of better contrastive pairs. Our extensive quantitative experiments on two public 3D medical image segmentation benchmarks, FLARE 2022 and BTCV, demonstrate the superiority of Alice, which surpasses the previous best SSL methods by 2.11% and 1.77% in Dice coefficient, respectively.
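The anatomy-invariant contrastive objective described above can be sketched in the form of a standard InfoNCE-style loss; the exact formulation used by Alice is not given in this abstract, so the following is a generic illustration under that assumption:

```latex
\mathcal{L}_{\text{con}}
  = -\log \frac{\exp\!\big(\mathrm{sim}(z_i, z_j)/\tau\big)}
               {\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]}
                \exp\!\big(\mathrm{sim}(z_i, z_k)/\tau\big)}
```

Here $z_i$ and $z_j$ would denote embeddings of two diversely mined views sharing consistent high-level anatomical semantics (the positive pair), $\mathrm{sim}(\cdot,\cdot)$ a cosine similarity, $\tau$ a temperature hyperparameter, and the sum runs over the remaining $2N-1$ embeddings in the batch as negatives.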