Accurate segmentation of nodules in both 2D breast ultrasound (BUS) and 3D automated breast ultrasound (ABUS) is crucial for clinical diagnosis and treatment planning. Therefore, developing an automated system for nodule segmentation can enhance user independence and expedite clinical analysis. Unlike fully-supervised learning, weakly-supervised segmentation (WSS) can streamline the laborious and intricate annotation process. However, current WSS methods struggle to achieve precise nodule segmentation, as many of them depend on inaccurate activation maps or inefficient pseudo-mask generation algorithms. In this study, we introduce a novel multi-agent reinforcement learning-based WSS framework called Flip Learning, which relies solely on 2D/3D boxes for accurate segmentation. Specifically, multiple agents erase the target from the box so that its classification tag flips, and the erased region serves as the predicted segmentation mask. The key contributions of this research are as follows: (1) Adoption of a superpixel/supervoxel-based approach to encode the standardized environment, capturing boundary priors and expediting the learning process. (2) Introduction of three carefully designed rewards, comprising a classification score reward and two intensity distribution rewards, to steer the agents' erasing process precisely, thereby avoiding both under- and over-segmentation. (3) Implementation of a progressive curriculum learning strategy to enable agents to interact with the environment in a progressively challenging manner, thereby enhancing learning efficiency. Extensive validation on large in-house BUS and ABUS datasets shows that our Flip Learning method outperforms state-of-the-art WSS methods and foundation models, and achieves performance comparable to fully-supervised learning algorithms.
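To make the "erase-to-flip" idea concrete, the sketch below illustrates, under simplified assumptions, how a per-step reward could combine a classification-flip term with an intensity-distribution term when an agent erases one superpixel inside the annotated box. The classifier stub, the quadrant "superpixels", the background fill value, and the reward weights are all hypothetical placeholders for illustration, not the authors' implementation.

```python
# Illustrative sketch of a flip-learning-style erasing reward (assumptions noted inline).
import numpy as np


def classification_score(patch: np.ndarray) -> float:
    """Placeholder for a trained classifier returning P(nodule present in patch).

    Hypothetical stand-in: brighter crops are treated as more "nodule-like".
    """
    return float(np.clip(patch.mean(), 0.0, 1.0))


def erase_superpixel(patch: np.ndarray, superpixels: np.ndarray,
                     sp_id: int, fill_value: float) -> np.ndarray:
    """Erase one superpixel by filling it with a background-like intensity."""
    erased = patch.copy()
    erased[superpixels == sp_id] = fill_value
    return erased


def step_reward(patch, superpixels, sp_id, fill_value, w_cls=1.0, w_int=0.1):
    """Reward one erasing action: classification-flip term + one intensity term."""
    before = classification_score(patch)
    erased = erase_superpixel(patch, superpixels, sp_id, fill_value)
    after = classification_score(erased)
    # Classification score reward: the drop in nodule probability, i.e. how much
    # this erasure pushes the tag toward flipping to "no nodule".
    r_cls = before - after
    # One possible intensity-distribution term: reward erasing regions whose
    # intensity differs from the background fill, so background-like areas
    # (over-segmentation) earn little.
    region = patch[superpixels == sp_id]
    r_int = float(np.abs(region.mean() - fill_value))
    return w_cls * r_cls + w_int * r_int, erased


# Toy usage: an 8x8 "box" crop partitioned into four quadrant superpixels,
# erased one at a time so rewards accumulate as the crop is progressively cleared.
rng = np.random.default_rng(0)
patch = rng.uniform(0.4, 0.9, size=(8, 8))   # bright, "nodule-like" crop
superpixels = np.zeros((8, 8), dtype=int)
superpixels[:4, 4:] = 1
superpixels[4:, :4] = 2
superpixels[4:, 4:] = 3
background_fill = 0.1

for sp_id in range(4):
    reward, patch = step_reward(patch, superpixels, sp_id, background_fill)
    print(f"erased superpixel {sp_id}: reward = {reward:.3f}")
```

In the full framework described above, such rewards would guide multiple agents over superpixels/supervoxels rather than fixed quadrants, and a second intensity-distribution term would additionally discourage under-segmentation.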