Most state-of-the-art instance segmentation methods have to be trained on densely annotated images. While difficult in general, this requirement is especially daunting for biomedical images, where domain expertise is often required for annotation and no large public data collections are available for pre-training. We propose to address the dense annotation bottleneck by introducing a proposal-free segmentation approach based on non-spatial embeddings, which exploits the structure of the learned embedding space to extract individual instances in a differentiable way. The segmentation loss can then be applied directly to instances, and the overall pipeline can be trained in a fully or weakly supervised manner, including the challenging case of positive-unlabeled supervision, where a novel self-supervised consistency loss is introduced for the unlabeled parts of the training data. We evaluate the proposed method on 2D and 3D segmentation problems in different microscopy modalities, as well as on the Cityscapes and CVPPP instance segmentation benchmarks, achieving state-of-the-art results on the latter. The code is available at https://github.com/kreshuklab/spoco
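To make the central idea concrete, below is a minimal PyTorch sketch of one way to extract an instance from a learned embedding space differentiably: each pixel embedding is compared to the embedding of an anchor pixel with a Gaussian kernel, yielding a soft mask to which a segmentation loss can be applied directly. The function name `extract_soft_mask`, the anchor choice, and the bandwidth `delta` are illustrative assumptions, not the repository's API.

```python
import torch

def extract_soft_mask(embeddings: torch.Tensor,
                      anchor: torch.Tensor,
                      delta: float = 0.5) -> torch.Tensor:
    """Differentiably turn per-pixel embeddings into a soft instance mask.

    embeddings: (E, H, W) embedding vector for every pixel
    anchor:     (E,) embedding of a pixel sampled from the target instance
    delta:      kernel bandwidth controlling how sharply the mask decays

    Returns an (H, W) tensor in [0, 1]; pixels whose embeddings lie close
    to the anchor in embedding space receive values near 1.
    """
    # Squared Euclidean distance from every pixel embedding to the anchor.
    dist_sq = ((embeddings - anchor[:, None, None]) ** 2).sum(dim=0)
    # Gaussian kernel: nearby embeddings map to mask values near 1.
    return torch.exp(-dist_sq / (2 * delta ** 2))

# Toy usage: because the mask is a differentiable function of the
# embeddings, a segmentation loss on the mask backpropagates into the
# embedding network.
emb = torch.randn(16, 64, 64, requires_grad=True)  # stand-in for network output
anchor = emb[:, 10, 10]                            # anchor pixel (hypothetical choice)
mask = extract_soft_mask(emb, anchor)
loss = (1 - mask[8:12, 8:12]).mean()               # toy objective on the soft mask
loss.backward()                                    # gradients reach `emb`
```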