Manual annotation of volumetric medical images, such as magnetic resonance imaging (MRI) and computed tomography (CT), is a labor-intensive and time-consuming process. Recent advances in foundation models for video object segmentation, such as Segment Anything Model 2 (SAM 2), offer an opportunity to significantly speed up annotation by manually labeling one or a few slices and then propagating target masks across the entire volume. However, the performance of SAM 2 in this setting is inconsistent. Our experiments show that relying on a single memory bank and attention module is prone to error propagation, particularly at boundary regions where the target is present in the previous slice but absent in the current one. To address this problem, we propose Short-Long Memory SAM 2 (SLM-SAM 2), a novel architecture that integrates distinct short-term and long-term memory banks with separate attention modules to improve segmentation accuracy. We evaluate SLM-SAM 2 on four public datasets covering organs, bones, and muscles across MRI, CT, and ultrasound videos. The proposed method markedly outperforms the default SAM 2, improving the average Dice Similarity Coefficient by 0.14 and 0.10 when 5 volumes and 1 volume, respectively, are available for initial adaptation. SLM-SAM 2 also exhibits stronger resistance to over-propagation, reducing the time required to correct propagated masks by 60.575% per volume compared to SAM 2, a notable step toward more accurate automated annotation of medical images for segmentation model development.
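To make the core idea concrete, the following is a minimal illustrative sketch (not the official SLM-SAM 2 implementation) of how a short-term memory bank of recent slices and a sparse long-term bank of anchor slices might be maintained separately, so that two attention modules could attend over each context independently. The class name, capacities, and the stride-based long-term selection rule are all assumptions made for illustration.

```python
from collections import deque

class ShortLongMemoryBank:
    """Hypothetical sketch: a short-term FIFO of the most recent slice
    features plus a sparse long-term bank of anchor slices (e.g., the
    manually annotated prompt slice). Not the paper's actual code."""

    def __init__(self, short_capacity=3, long_stride=5):
        self.short = deque(maxlen=short_capacity)  # recent slices only
        self.long = []                             # sparse anchors
        self.long_stride = long_stride             # assumed selection rule
        self._idx = 0

    def update(self, feature, is_prompted=False):
        """Add one slice's feature after segmenting it."""
        self.short.append(feature)
        # Keep prompted slices and every Nth slice as long-term anchors,
        # so early, reliable context is never evicted by recent errors.
        if is_prompted or self._idx % self.long_stride == 0:
            self.long.append(feature)
        self._idx += 1

    def contexts(self):
        """Return the two contexts for the two separate attention modules."""
        return list(self.short), list(self.long)


# Toy usage: integers stand in for slice feature embeddings.
bank = ShortLongMemoryBank(short_capacity=3, long_stride=5)
for i in range(10):
    bank.update(i, is_prompted=(i == 0))
short_ctx, long_ctx = bank.contexts()
```

The design intuition matching the abstract: the short-term bank tracks local slice-to-slice changes, while the long-term bank preserves trusted anchors, limiting the error propagation that a single shared memory suffers at boundaries where the target disappears.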