Semantic segmentation models have two fundamental weaknesses: i) they require large training sets with costly pixel-level annotations, and ii) they have a static output space, constrained to the classes of the training set. Toward addressing both problems, we introduce a new task, Incremental Few-Shot Segmentation (iFSS). The goal of iFSS is to extend a pretrained segmentation model with new classes from few annotated images and without access to old training data. To overcome the limitations of existing models in iFSS, we propose Prototype-based Incremental Few-Shot Segmentation (PIFS), which couples prototype learning and knowledge distillation. PIFS exploits prototypes to initialize the classifiers of new classes, fine-tuning the network to refine its feature representation. We design a prototype-based distillation loss on the scores of both old and new class prototypes to avoid overfitting and forgetting, and use batch renormalization to cope with non-i.i.d. few-shot data. We create an extensive benchmark for iFSS showing that PIFS outperforms several few-shot and incremental learning methods in all scenarios.
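To illustrate the prototype-based classifier initialization the abstract refers to, the following is a minimal sketch, assuming prototypes are obtained by masked average pooling of backbone features over the pixels annotated with the new class and then used as the weights of the new classifier before fine-tuning. All names (`class_prototype`, the feature dimensionality, the toy tensors) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def class_prototype(features, mask):
    """Masked average pooling: average the feature vectors of the pixels
    annotated with the new class to obtain its prototype.

    features: (B, C, H, W) backbone features
    mask:     (B, H, W)    binary annotation of the new class
    """
    mask = mask.unsqueeze(1).float()                       # (B, 1, H, W)
    mask = F.interpolate(mask, size=features.shape[-2:], mode="nearest")
    num = (features * mask).sum(dim=(0, 2, 3))             # (C,)
    den = mask.sum(dim=(0, 2, 3)).clamp(min=1e-6)
    return num / den                                        # (C,)


# Hypothetical usage: extend a 1x1-conv classifier with one new class,
# initializing its weights with the prototype (bias set to zero).
feat_dim, n_old = 256, 16
classifier = torch.nn.Conv2d(feat_dim, n_old, kernel_size=1)

features = torch.randn(2, feat_dim, 32, 32)        # toy few-shot features
mask = (torch.rand(2, 128, 128) > 0.5).long()      # toy pixel annotation

proto = class_prototype(features, mask)            # (feat_dim,)
new_weight = torch.cat([classifier.weight.data,
                        proto.view(1, feat_dim, 1, 1)], dim=0)
new_bias = torch.cat([classifier.bias.data, torch.zeros(1)], dim=0)

extended = torch.nn.Conv2d(feat_dim, n_old + 1, kernel_size=1)
extended.weight.data.copy_(new_weight)
extended.bias.data.copy_(new_bias)
```

After this initialization, the whole network would be fine-tuned on the few annotated images, with the prototype-based distillation loss constraining the scores of old and new classes to limit forgetting and overfitting.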