Most state-of-the-art instance segmentation methods have to be trained on densely annotated images. While difficult in general, this requirement is especially daunting for biomedical images, where domain expertise is often required for annotation. We propose to address the dense annotation bottleneck by introducing a proposal-free segmentation approach based on non-spatial embeddings, which exploits the structure of the learned embedding space to extract individual instances in a differentiable way. The segmentation loss can then be applied directly to the instances, and the overall method can be trained on ground-truth images where only a few objects are annotated, either from scratch or in a semi-supervised transfer learning setting. In addition to the segmentation loss, our setup allows us to apply self-supervised consistency losses on the unlabeled parts of the training data. We evaluate the proposed method on challenging 2D and 3D segmentation problems in different microscopy modalities, as well as on the popular CVPPP instance segmentation benchmark, where we achieve state-of-the-art results. The code is available at: https://github.com/kreshuklab/spoco
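To illustrate the core idea of extracting an instance differentiably from a learned embedding space and supervising it with a per-object segmentation loss, here is a minimal PyTorch sketch. It is not the implementation from the linked repository; the function names, the Gaussian-kernel form of the soft mask, and the bandwidth `delta` are illustrative assumptions.

```python
import torch

def soft_instance_mask(embeddings, anchor_yx, delta=1.5):
    """Differentiably extract one instance from a dense embedding map.

    embeddings: (E, H, W) pixel-wise embedding tensor from a network.
    anchor_yx:  (y, x) coordinate of a pixel inside one annotated object.
    delta:      kernel bandwidth (illustrative value, an assumption here).

    Returns a soft mask of shape (H, W) in [0, 1]: pixels whose embeddings
    lie close to the anchor embedding receive values near 1.
    """
    anchor = embeddings[:, anchor_yx[0], anchor_yx[1]].view(-1, 1, 1)
    dist_sq = ((embeddings - anchor) ** 2).sum(dim=0)
    return torch.exp(-dist_sq / (2 * delta ** 2))  # Gaussian kernel soft mask

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted soft mask and a binary GT mask."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy usage: supervise a single annotated object, ignoring unlabeled pixels.
emb = torch.randn(16, 64, 64, requires_grad=True)       # stand-in embedding map
gt_mask = torch.zeros(64, 64)
gt_mask[20:40, 20:40] = 1                                # sparse annotation of one object
loss = dice_loss(soft_instance_mask(emb, (30, 30)), gt_mask)
loss.backward()
```

Because the soft mask is a differentiable function of the embeddings, the per-instance loss back-propagates into the embedding network even when only a handful of objects in an image carry annotations.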