Panoptic segmentation, a recently introduced task that unifies instance segmentation and semantic segmentation, has attracted considerable attention. However, most previous methods consist of multiple pathways, each specialized for a designated segmentation task. In this paper, we propose to resolve panoptic segmentation in a single shot by integrating these execution flows. With the integrated pathway, a unified feature map called Panoptic-Feature is generated, which encodes the information of both things and stuff. Panoptic-Feature is further refined by auxiliary problems that guide the network to cluster pixels belonging to the same instance and to differentiate between objects of different classes. A collection of convolutional filters, each representing either a thing or a stuff class, is applied to Panoptic-Feature at once, realizing single-shot panoptic segmentation. Taking advantage of both top-down and bottom-up approaches, our method, named SPINet, achieves high efficiency and accuracy on the major panoptic segmentation benchmarks: COCO and Cityscapes.
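To make the single-shot formulation concrete, the following is a minimal sketch (not the authors' implementation) of the core operation described above: a unified per-pixel feature map and a bank of filters, one per thing instance or stuff class, applied in a single convolution so that every segment mask is produced at once. All names, shapes, and the use of random placeholder filters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes for illustration only.
batch, channels, height, width = 1, 64, 200, 304
num_segments = 80  # total number of thing instances + stuff classes

# Unified feature map shared by things and stuff (assumed shape: B x C x H x W),
# standing in for Panoptic-Feature.
panoptic_feature = torch.randn(batch, channels, height, width)

# One 1x1 filter per segment; in the paper these would be predicted/learned,
# here they are random placeholders.
segment_filters = torch.randn(num_segments, channels, 1, 1)

# Single-shot prediction: every thing and stuff mask logit in one convolution.
mask_logits = F.conv2d(panoptic_feature, segment_filters)  # B x num_segments x H x W

# Assigning each pixel to its highest-scoring segment yields a panoptic map.
panoptic_map = mask_logits.argmax(dim=1)  # B x H x W
print(panoptic_map.shape)
```

Under this reading, the "single shot" comes from the fact that one convolution over the shared feature map covers both things and stuff, rather than routing them through separate pathways.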