Recently, weakly-supervised image segmentation using weak annotations such as scribbles has attracted considerable attention, since such annotations are much easier to obtain than time-consuming, labor-intensive labeling at the pixel/voxel level. However, because scribbles lack structural information about the region of interest (ROI), existing scribble-based methods suffer from poor boundary localization. Furthermore, most current methods are designed for 2D image segmentation and therefore fail to fully leverage volumetric information when applied to image slices directly. In this paper, we propose a scribble-based volumetric image segmentation method, Scribble2D5, which tackles 3D anisotropic image segmentation and improves boundary prediction. To achieve this, we augment a 2.5D attention UNet with a proposed label propagation module that extends semantic information from scribbles, and with a combination of static and active boundary prediction to learn the ROI's boundary and regularize its shape. Extensive experiments on three public datasets demonstrate that Scribble2D5 significantly outperforms current scribble-based methods and approaches the performance of fully-supervised ones. Our code is available online.
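To make the 2.5D design concrete, below is a minimal PyTorch sketch, not the authors' released implementation, of the two ideas named in the abstract: a 2.5D convolutional block for anisotropic volumes (in-plane convolutions paired with a thin through-plane convolution) and a decoder with both a segmentation head and a boundary head. All module names, channel sizes, and the (1,3,3)/(3,1,1) kernel factorization here are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming a (1,3,3)+(3,1,1) factorization as one plausible
# reading of "2.5D" for anisotropic volumes; names are hypothetical.
import torch
import torch.nn as nn

class Conv2d5Block(nn.Module):
    """2.5D block: a (1,3,3) in-plane conv followed by a (3,1,1)
    through-plane conv, keeping most capacity in the high-resolution
    plane of an anisotropic volume."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class Scribble2D5Sketch(nn.Module):
    """Toy two-level encoder/decoder with two heads: a segmentation head
    and a boundary head, mirroring the joint mask/boundary prediction
    described in the abstract. Depths and channel sizes are placeholders."""
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc1 = Conv2d5Block(in_ch, base)
        self.down = nn.MaxPool3d(kernel_size=(1, 2, 2))  # pool in-plane only
        self.enc2 = Conv2d5Block(base, base * 2)
        self.up = nn.Upsample(scale_factor=(1, 2, 2), mode="trilinear",
                              align_corners=False)
        self.dec1 = Conv2d5Block(base * 2 + base, base)
        self.seg_head = nn.Conv3d(base, n_classes, kernel_size=1)
        self.bnd_head = nn.Conv3d(base, 1, kernel_size=1)  # boundary map

    def forward(self, x):
        f1 = self.enc1(x)                                   # full resolution
        f2 = self.enc2(self.down(f1))                       # half in-plane res
        d1 = self.dec1(torch.cat([self.up(f2), f1], dim=1)) # skip connection
        return self.seg_head(d1), torch.sigmoid(self.bnd_head(d1))

if __name__ == "__main__":
    vol = torch.randn(1, 1, 8, 64, 64)   # anisotropic: 8 thick slices
    seg, bnd = Scribble2D5Sketch()(vol)
    print(seg.shape, bnd.shape)          # (1,2,8,64,64) and (1,1,8,64,64)
```

The sketch omits the attention gates, the label propagation module, and the static/active boundary losses; it only illustrates how in-plane and through-plane convolutions can be separated so that an anisotropic volume is processed without assuming isotropic 3D context.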