In this paper, we propose a new approach that uses point-level annotations for weakly-supervised panoptic segmentation. Instead of the dense pixel-level labels used by fully supervised methods, point-level labels provide only a single point per target as supervision, significantly reducing the annotation burden. We formulate the problem in an end-to-end framework that simultaneously generates panoptic pseudo-masks from point-level labels and learns from them. To tackle the core challenge, i.e., panoptic pseudo-mask generation, we propose a principled approach to parsing pixels by minimizing pixel-to-point traversing costs, which model semantic similarity, low-level texture cues, and high-level manifold knowledge to discriminate panoptic targets. We conduct experiments on the Pascal VOC and the MS COCO datasets to demonstrate the approach's effectiveness and show state-of-the-art performance on the weakly-supervised panoptic segmentation problem. Code is available at https://github.com/BraveGroup/PSPS.git.
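The idea of parsing pixels by minimizing pixel-to-point traversing costs can be illustrated with a minimal sketch: a multi-source Dijkstra search over a 4-connected pixel grid, where the cost of stepping between neighboring pixels is their feature difference, and each pixel is assigned the label of the cheapest-to-reach annotated point. This is only an assumed simplification for illustration — the function name `assign_pixels_to_points` is hypothetical, and the paper's actual cost additionally incorporates semantic similarity and high-level manifold knowledge, not just the low-level feature term shown here.

```python
import heapq
import numpy as np

def assign_pixels_to_points(feat, points):
    """Hypothetical sketch: assign each pixel to the annotated point with
    the smallest traversing cost via multi-source Dijkstra on a
    4-connected grid. The per-step cost is the feature difference
    between neighboring pixels (a stand-in for the paper's richer cost).

    feat:   (H, W, C) float array of per-pixel features (e.g., color)
    points: list of ((row, col), label) point annotations
    returns an (H, W) int array of per-pixel labels
    """
    H, W = feat.shape[:2]
    cost = np.full((H, W), np.inf)          # best traversing cost so far
    label = np.full((H, W), -1, dtype=int)  # label of the cheapest source
    heap = []
    for (r, c), lab in points:              # seed the search at each point
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c, lab))
    while heap:
        d, r, c, lab = heapq.heappop(heap)
        if d > cost[r, c]:
            continue                        # stale heap entry, skip
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                # step cost: feature difference between neighbors
                step = float(np.linalg.norm(feat[nr, nc] - feat[r, c]))
                nd = d + step
                if nd < cost[nr, nc]:
                    cost[nr, nc] = nd
                    label[nr, nc] = lab
                    heapq.heappush(heap, (nd, nr, nc, lab))
    return label
```

On a toy image split into two flat regions with one annotated point in each, the assignment follows the region boundary, since crossing it incurs a large feature-difference cost while moving within a region is free.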