The success of supervised deep learning models in medical image segmentation relies on detailed annotations. However, labor-intensive manual labeling is costly and inefficient, especially for dense object segmentation. To this end, we propose a self-supervised learning approach with a Prior Self-activation Module (PSM) that generates self-activation maps from the input images to avoid labeling costs and further produces pseudo masks for the downstream task. Specifically, we first train a neural network with self-supervised learning and exploit the gradient information in the shallow layers of the network to generate self-activation maps. Afterwards, a semantic-guided generator is introduced as a pipeline to transform the visual representations from the PSM into pixel-level semantic pseudo masks for downstream tasks. Furthermore, a two-stage training module, consisting of a nuclei detection network and a nuclei segmentation network, is adopted to achieve the final segmentation. Experimental results on two public pathological datasets demonstrate the effectiveness of our method. Compared with fully-supervised and weakly-supervised methods, our method achieves competitive performance without any manual annotations.
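To make the PSM idea concrete, the following is a minimal sketch of how a self-activation map could be derived from the gradients of a shallow layer of a self-supervised network. The backbone (ResNet-18), the choice of `layer1` as the shallow layer, the dummy pretext loss, and the Grad-CAM-style channel weighting are all illustrative assumptions, not the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Hypothetical sketch: derive a per-pixel self-activation map from the
# gradients of a shallow layer, in the spirit of the PSM described above.
# The backbone, pretext objective, and gradient weighting are assumptions.

def self_activation_map(model, shallow_layer, image, pretext_loss_fn):
    """Return a normalized activation map for one input image (1, C, H, W)."""
    feats = {}

    def hook(_module, _inputs, output):
        output.retain_grad()          # keep gradients of this intermediate feature
        feats["act"] = output

    handle = shallow_layer.register_forward_hook(hook)

    model.zero_grad()
    output = model(image)
    loss = pretext_loss_fn(output)    # self-supervised pretext objective (assumed)
    loss.backward()
    handle.remove()

    act, grad = feats["act"], feats["act"].grad
    # Grad-CAM-style weighting: channel weights from spatially pooled gradients.
    weights = grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    # Normalize to [0, 1] so the map can later be thresholded into a pseudo mask.
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam


# Example usage with an assumed backbone, shallow layer, and dummy pretext loss.
backbone = resnet18(weights=None)
image = torch.randn(1, 3, 224, 224)
cam = self_activation_map(backbone, backbone.layer1, image,
                          pretext_loss_fn=lambda out: out.norm())
pseudo_mask = (cam > 0.5).float()     # coarse pseudo mask for the downstream task
```

In the full pipeline, such a map would be passed to the semantic-guided generator to obtain pixel-level pseudo masks, which then supervise the detection and segmentation networks.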