Prototype learning is extensively used in few-shot segmentation. Typically, a single prototype is obtained from the support features by averaging the global object information. However, using one prototype to represent all the information may lead to ambiguities. In this paper, we propose two novel modules, named superpixel-guided clustering (SGC) and guided prototype allocation (GPA), for multiple prototype extraction and allocation. Specifically, SGC is a parameter-free and training-free approach that extracts more representative prototypes by aggregating similar feature vectors, while GPA is able to select matched prototypes to provide more accurate guidance. By integrating SGC and GPA, we propose the Adaptive Superpixel-guided Network (ASGNet), a lightweight model that adapts to object scale and shape variation. In addition, our network easily generalizes to k-shot segmentation with substantial improvement and no additional computational cost. In particular, our evaluations on COCO demonstrate that ASGNet surpasses the state-of-the-art method by 5% in 5-shot segmentation.
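To make the two modules concrete, the following is a minimal, hypothetical sketch of the two ideas the abstract describes: clustering masked support features into multiple prototypes (SGC-style), and letting each query position pick its best-matching prototype (GPA-style). All hyperparameters, the cosine-similarity metric, and the initialization scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sgc_prototypes(features, mask, num_prototypes=5, iters=10):
    """SGC-style sketch: cluster masked support features into several
    prototypes by iterative similarity assignment and averaging.
    features: (H, W, C) support feature map; mask: (H, W) binary mask.
    Hyperparameters/initialization are illustrative, not the paper's."""
    fg = features[mask > 0]                       # (N, C) foreground vectors
    n = fg.shape[0]
    k = min(num_prototypes, n)
    # initialize centers from evenly spaced foreground vectors (assumption)
    centers = fg[np.linspace(0, n - 1, k).astype(int)].copy()
    for _ in range(iters):
        # cosine similarity between each vector and each center
        fn = fg / (np.linalg.norm(fg, axis=1, keepdims=True) + 1e-8)
        cn = centers / (np.linalg.norm(centers, axis=1, keepdims=True) + 1e-8)
        assign = (fn @ cn.T).argmax(axis=1)       # nearest center per vector
        for j in range(k):
            members = fg[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)  # update by averaging
    return centers                                 # (k, C) prototypes

def gpa_guidance(query, prototypes):
    """GPA-style sketch: each query position selects its most similar
    prototype, yielding a per-pixel guidance feature map."""
    qn = query / (np.linalg.norm(query, axis=-1, keepdims=True) + 1e-8)
    pn = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sim = qn @ pn.T                                # (H, W, k) similarities
    matched = sim.argmax(axis=-1)                  # matched prototype index
    return prototypes[matched]                     # (H, W, C) guidance map
```

Note that `sgc_prototypes` involves no learnable parameters, consistent with the abstract's claim that SGC is parameter-free and training-free; only simple assignment and averaging are used.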