Few-shot segmentation aims to segment images containing objects from previously unseen classes using only a few annotated samples. Most current methods focus on using object information, extracted with the aid of human annotations from support images, to identify the same objects in new query images. However, background information can also be useful for distinguishing objects from their surroundings. Hence, some previous methods also extract background information from the support images. In this paper, we argue that such information is of limited utility, as the background in different images can vary widely. To overcome this issue, we propose CobNet, which utilises background information that is extracted from the query images themselves, without requiring annotations of those images. Experiments show that our method achieves mean Intersection-over-Union scores of 61.4% and 37.8% for 1-shot segmentation on PASCAL-5i and COCO-20i respectively, outperforming previous methods. It is also shown to produce a state-of-the-art performance of 53.7% for weakly-supervised few-shot segmentation, where no annotations are provided for the support images.
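For reference, the mean Intersection-over-Union (mIoU) figures quoted above average the per-class IoU over the evaluated classes. A minimal NumPy sketch of this metric is given below; the function name and the convention of skipping classes absent from both maps are illustrative assumptions, not the paper's evaluation code, and benchmark protocols for PASCAL-5i and COCO-20i additionally average over cross-validation folds.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Per-class Intersection-over-Union, averaged over classes.

    pred, target: integer label maps of identical shape.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both maps; excluded from the average
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))
```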