Prototype learning and decoder construction are the keys to few-shot segmentation. However, existing methods rely on a single prototype generation mode, which cannot cope with the intractable problem of objects at various scales. Moreover, the one-way forward propagation adopted by previous methods may dilute the information carried by registered features during decoding. In this work, we propose a rich prototype generation module (RPGM) to reinforce the prototype learning paradigm and a recurrent prediction enhancement module (RPEM) to build a unified memory-augmented decoder for few-shot segmentation. Specifically, the RPGM combines superpixel and K-means clustering to generate rich prototype features with complementary scale relationships, adapting to the scale gap between support and query images. The RPEM exploits a recurrent mechanism to design a round-way propagation decoder, so that registered features continuously provide object-aware information. Experiments show that our method consistently outperforms other competitors on two popular benchmarks, PASCAL-$5^{i}$ and COCO-$20^{i}$.
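To make the prototype-generation idea concrete, the sketch below shows the common few-shot-segmentation recipe that the RPGM builds on: masked average pooling yields one global prototype from the support features, while K-means clustering over the same masked features yields multiple part-level prototypes at a finer scale. This is a minimal illustration of the general paradigm, not the paper's exact implementation; the function name and hyperparameters (`k`, `iters`) are placeholders chosen for the example.

```python
import numpy as np

def kmeans_prototypes(feats, mask, k=3, iters=10, seed=0):
    """Generate prototypes from a support feature map.

    feats: (H, W, C) support feature map.
    mask:  (H, W) binary foreground mask.
    Returns (k, C) K-means part prototypes and the (C,) global
    masked-average-pooling prototype.
    """
    fg = feats[mask.astype(bool)]  # (N, C) foreground feature vectors
    rng = np.random.default_rng(seed)
    centers = fg[rng.choice(len(fg), size=k, replace=False)]
    for _ in range(iters):
        # Assign each foreground feature to its nearest center.
        dists = ((fg[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Update each center; keep the old one if its cluster empties.
        for j in range(k):
            pts = fg[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    global_proto = fg.mean(0)  # classic masked average pooling
    return centers, global_proto
```

The two prototype sets are complementary: the global prototype captures the object as a whole, while the clustered prototypes cover parts at smaller scales, which is the scale-complementarity the RPGM aims for.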