Few-shot semantic segmentation aims to segment novel-class objects in a query image given only a few annotated examples in support images. Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype. However, this framework suffers from biased classification due to incomplete construction of sample pairs, which involve the foreground prototype only. To address this issue, in this paper, we introduce a complementary self-contrastive task into few-shot semantic segmentation. Our new model is able to associate the pixels in a region with the prototype of this region, no matter whether they are in the foreground or background. To this end, we generate self-contrastive background prototypes directly from the query image, which enables the construction of complete sample pairs and thus a complementary and auxiliary segmentation task for training a better segmentation model. Extensive experiments on PASCAL-5$^i$ and COCO-20$^i$ clearly demonstrate the superiority of our proposal. Without sacrificing inference efficiency, our model achieves state-of-the-art results in both 1-shot and 5-shot settings for few-shot semantic segmentation.
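To make the described pipeline concrete, below is a minimal sketch (not the authors' code) of prototype-based matching with a self-contrastive background prototype, assuming PyTorch. All function and variable names (`masked_average_pooling`, `support_feat`, `query_pred`, etc.) are illustrative; the foreground prototype is pooled from the support mask, while the background prototype is pooled directly from the query image using the current query prediction as a pseudo background mask, completing the sample pairs.

```python
# Hypothetical sketch of prototype matching for few-shot segmentation.
import torch
import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    """Average features over a masked region to obtain a prototype.

    feat: (B, C, H, W) backbone feature map; mask: (B, 1, H, W) float mask.
    Returns a (B, C) prototype vector.
    """
    mask = F.interpolate(mask, size=feat.shape[-2:],
                         mode="bilinear", align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

def prototype_matching(query_feat, prototype):
    """Match every query pixel to a prototype via cosine similarity.

    query_feat: (B, C, H, W); prototype: (B, C). Returns (B, H, W).
    """
    return F.cosine_similarity(query_feat, prototype[..., None, None], dim=1)

def self_contrastive_logits(support_feat, support_mask, query_feat, query_pred):
    """Build complete sample pairs: a foreground prototype from the support
    set and a self-contrastive background prototype from the query itself
    (using the predicted foreground probability as a pseudo mask)."""
    fg_proto = masked_average_pooling(support_feat, support_mask)
    bg_proto = masked_average_pooling(query_feat, 1.0 - query_pred)
    fg_sim = prototype_matching(query_feat, fg_proto)
    bg_sim = prototype_matching(query_feat, bg_proto)
    # (B, 2, H, W) logits for background/foreground pixel classification
    return torch.stack([bg_sim, fg_sim], dim=1)
```

In this sketch, the returned logits could supervise the auxiliary segmentation task alongside the standard foreground-matching loss; since both prototypes are computed only during training, inference cost is unaffected.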