This work introduces a new proposal-free instance segmentation method that builds on single-instance segmentation masks predicted across the entire image in a sliding-window fashion. In contrast to related approaches, our method predicts all masks concurrently, one for each pixel, and thus resolves conflicts jointly across the entire image. Specifically, predictions from overlapping masks are combined into the edge weights of a signed graph, which is subsequently partitioned to obtain all final instances at once. The result is a parameter-free method that is highly robust to noise and prioritizes predictions with the highest consensus across overlapping masks. All masks are decoded from a low-dimensional latent representation, which yields substantial memory savings that are essential for applications to large volumetric images. We test our method on the challenging CREMI 2016 neuron segmentation benchmark, where it achieves competitive scores.
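To make the core idea concrete, the following is a minimal sketch of turning overlapping per-pixel mask predictions into signed edge weights and then partitioning them. It is not the paper's architecture: the window size, the voting rule `(2a - 1)(2b - 1)`, and the greedy union-find partitioner are illustrative assumptions (the paper partitions the signed graph jointly, e.g. with a dedicated signed-graph solver), and all names here are hypothetical.

```python
import numpy as np

def signed_edge_weights(masks, r):
    """Combine overlapping mask windows into signed edge weights.

    masks has shape (H, W, 2r+1, 2r+1): masks[ci, cj] holds, for each
    pixel in the window around (ci, cj), the predicted probability that
    it belongs to the same instance as (ci, cj).  Every window covering
    both endpoints of a 4-neighbour edge casts a vote (2a - 1)(2b - 1)
    in [-1, 1]; votes are summed, so one noisy mask is outvoted by the
    consensus of the overlapping masks.
    """
    H, W = masks.shape[:2]
    wh = np.zeros((H, W - 1))  # weight of edge (i, j) -- (i, j + 1)
    wv = np.zeros((H - 1, W))  # weight of edge (i, j) -- (i + 1, j)
    for ci in range(H):
        for cj in range(W):
            m = 2.0 * masks[ci, cj] - 1.0  # map [0, 1] to [-1, 1]
            for u in range(2 * r + 1):
                for v in range(2 * r + 1):
                    i, j = ci + u - r, cj + v - r
                    if not (0 <= i < H and 0 <= j < W):
                        continue
                    if j + 1 < W and v + 1 <= 2 * r:
                        wh[i, j] += m[u, v] * m[u, v + 1]
                    if i + 1 < H and u + 1 <= 2 * r:
                        wv[i, j] += m[u, v] * m[u + 1, v]
    return wh, wv

def partition(wh, wv):
    """Toy stand-in for signed-graph partitioning: union pixels across
    edges with positive total weight, keep negative edges cut."""
    H, W = wh.shape[0], wv.shape[1]
    parent = list(range(H * W))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(H):
        for j in range(W - 1):
            if wh[i, j] > 0:
                parent[find(i * W + j)] = find(i * W + j + 1)
    for i in range(H - 1):
        for j in range(W):
            if wv[i, j] > 0:
                parent[find(i * W + j)] = find((i + 1) * W + j)
    return [find(p) for p in range(H * W)]

# Toy input: a 4x4 image whose left and right halves are two instances,
# with ideal 3x3 mask windows derived from the labels (0.5 = neutral
# for window positions falling outside the image).
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1
masks = np.full((4, 4, 3, 3), 0.5)
for ci in range(4):
    for cj in range(4):
        for u in range(3):
            for v in range(3):
                i, j = ci + u - 1, cj + v - 1
                if 0 <= i < 4 and 0 <= j < 4:
                    masks[ci, cj, u, v] = float(labels[i, j] == labels[ci, cj])

wh, wv = signed_edge_weights(masks, 1)
comp = partition(wh, wv)
```

On this toy input, edges inside each half accumulate only positive votes, while every window straddling the boundary votes negatively on the crossing edges, so the partition recovers the two instances without any threshold parameter.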