Given a model pre-trained on a large-scale base dataset, Few-Shot Class-Incremental Learning (FSCIL) aims to incrementally learn novel classes from a few labeled samples while avoiding overfitting and without catastrophically forgetting previously encountered classes. Semi-supervised learning, which harnesses freely available unlabeled data to compensate for limited labeled data, has boosted performance in numerous vision tasks and can heuristically be applied to tackle the issues in FSCIL, yielding Semi-supervised FSCIL (Semi-FSCIL). So far, very limited work has focused on the Semi-FSCIL task, leaving the adaptability of semi-supervised learning to the FSCIL task unresolved. In this paper, we focus on this adaptability issue and present a simple yet efficient Semi-FSCIL framework named Uncertainty-aware Distillation with Class-Equilibrium (UaD-CE), comprising two modules, UaD and CE. Specifically, when incorporating unlabeled data into each incremental session, we introduce the CE module, which employs class-balanced self-training to prevent easy-to-classify classes from gradually dominating pseudo-label generation. To distill reliable knowledge from the reference model, we further implement the UaD module, which combines uncertainty-guided knowledge refinement with adaptive distillation. Comprehensive experiments on three benchmark datasets demonstrate that our method improves the adaptability of semi-supervised learning with unlabeled data to FSCIL tasks.
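To make the two modules concrete, the sketch below illustrates one plausible reading of each idea as named in the abstract: a per-class quota on pseudo-labels (CE) and an entropy-weighted distillation loss (UaD). This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation; the function names, the quota parameter k, and the use of predictive entropy as the uncertainty measure are all illustrative choices.

```
# Minimal sketch (not the authors' code) of the two ideas named in the
# abstract, assuming a standard PyTorch setup. select_class_balanced,
# uncertainty_weighted_distill, the per-class quota k, and entropy as
# the uncertainty proxy are illustrative assumptions.
import torch
import torch.nn.functional as F

def select_class_balanced(logits_u: torch.Tensor, k: int):
    """Class-balanced self-training step (CE module, as we read it):
    keep at most k pseudo-labeled samples *per class*, ranked by
    confidence, so easy-to-classify classes cannot dominate
    pseudo-label generation."""
    probs = logits_u.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)          # confidence and pseudo-label
    keep = []
    for c in pseudo.unique():
        idx = (pseudo == c).nonzero(as_tuple=True)[0]
        top = conf[idx].argsort(descending=True)[:k]  # per-class quota
        keep.append(idx[top])
    keep = torch.cat(keep)
    return keep, pseudo[keep]

def uncertainty_weighted_distill(student_logits, teacher_logits, T=2.0):
    """Uncertainty-guided distillation (UaD module, as we read it):
    down-weight the KL term on samples where the reference (teacher)
    model is uncertain, using normalized predictive entropy."""
    p_t = (teacher_logits / T).softmax(dim=1)
    entropy = -(p_t * p_t.clamp_min(1e-8).log()).sum(dim=1)
    # Normalize entropy to [0, 1] by log(num_classes); weight = 1 - entropy.
    w = 1.0 - entropy / torch.log(torch.tensor(float(teacher_logits.size(1))))
    kl = F.kl_div((student_logits / T).log_softmax(dim=1), p_t,
                  reduction="none").sum(dim=1) * T * T
    return (w.detach() * kl).mean()

# Toy usage: 32 unlabeled samples, 10 classes, quota of 2 per class.
if __name__ == "__main__":
    logits_u = torch.randn(32, 10)
    idx, labels = select_class_balanced(logits_u, k=2)
    loss = uncertainty_weighted_distill(torch.randn(32, 10), logits_u)
    print(idx.shape, labels.shape, loss.item())
```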