We introduce a novel segmentation-aware joint training framework, the generative reinforcement network (GRN), which integrates segmentation loss feedback to optimize both image generation and segmentation performance in a single stage. We also develop an image enhancement technique called segmentation-guided enhancement (SGE), in which the generator produces images tailored specifically to the segmentation model. Two variants of GRN are further developed: GRN for sample-efficient learning (GRN-SEL) and GRN for semi-supervised learning (GRN-SSL). GRN's performance was evaluated on a dataset of 69 fully annotated 3D ultrasound scans from 29 subjects, with annotations covering six anatomical structures: dermis, superficial fat, superficial fascial membrane (SFM), deep fat, deep fascial membrane (DFM), and muscle. Our results show that GRN-SEL with SGE reduces labeling effort by up to 70% while achieving a 1.98% improvement in the Dice Similarity Coefficient (DSC) over models trained on fully labeled datasets. GRN-SEL alone reduces labeling effort by 60%, GRN-SSL with SGE decreases labeling requirements by 70%, and GRN-SSL alone by 60%, all while maintaining performance comparable to fully supervised models. These findings suggest that the GRN framework can optimize segmentation performance with significantly less labeled data, offering a scalable and efficient solution for ultrasound image analysis and reducing the burden of data annotation.