Class incremental learning (CIL) has attracted much attention, but most existing work fine-tunes the entire representation model, which inevitably leads to severe catastrophic forgetting. In contrast, with a semantically rich pre-trained representation model, parameter-additional tuning (PAT) changes only a few parameters to learn new visual concepts. Recent studies have shown that PAT-based CIL can naturally avoid catastrophic forgetting without the replay or distillation that most existing methods rely on. However, we find that PAT-based CIL still suffers from serious semantic drift, a problem caused by classifier learning bias across different learning phases, which significantly degrades the performance of PAT-based CIL. To address this, we propose Incremental Prototype Tuning (IPT), a simple but effective method that tunes category prototypes for classification and learns example prototypes to compensate for semantic drift. Extensive experiments demonstrate that our method effectively compensates for semantic drift. Combined with well pre-trained ViT backbones and other PAT methods, IPT surpasses state-of-the-art baselines on mainstream incremental learning benchmarks.
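To make the general setting concrete, below is a minimal sketch of prototype tuning over a frozen representation model, written in PyTorch. It is an illustration of the idea only, not the paper's IPT algorithm: the `FrozenBackbone`, `PrototypeClassifier`, and `learn_phase` names are hypothetical, a tiny MLP stands in for a pre-trained ViT, and the example-prototype drift compensation described in the abstract is omitted. Each incremental phase initializes new class prototypes from class-mean features and updates only the prototypes, leaving the backbone untouched.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a frozen, pre-trained backbone (e.g. a ViT).
# A tiny MLP keeps the sketch self-contained and runnable.
class FrozenBackbone(nn.Module):
    def __init__(self, in_dim=32, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))
        for p in self.parameters():
            p.requires_grad_(False)  # the representation model stays fixed

    def forward(self, x):
        return self.net(x)


class PrototypeClassifier(nn.Module):
    """Cosine-similarity classifier over learnable class prototypes."""
    def __init__(self):
        super().__init__()
        self.prototypes = nn.ParameterList()  # one prototype per seen class

    def add_classes(self, init_protos):
        # init_protos: (num_new_classes, feat_dim), e.g. class-mean features
        for p in init_protos:
            self.prototypes.append(nn.Parameter(p.clone()))

    def forward(self, feats):
        protos = torch.stack(list(self.prototypes))  # (num_classes, feat_dim)
        return F.normalize(feats, dim=-1) @ F.normalize(protos, dim=-1).T


def learn_phase(backbone, clf, data, labels, epochs=20, lr=0.01):
    """One incremental phase: only the prototypes are updated."""
    with torch.no_grad():
        feats = backbone(data)
    # initialize prototypes of newly seen classes with per-class mean features
    new_classes = sorted(set(labels.tolist()) - set(range(len(clf.prototypes))))
    clf.add_classes(torch.stack([feats[labels == c].mean(0) for c in new_classes]))
    opt = torch.optim.SGD(clf.parameters(), lr=lr)
    for _ in range(epochs):
        logits = clf(feats) / 0.1  # temperature-scaled cosine logits
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()


# Toy two-phase run: classes {0, 1} arrive first, then classes {2, 3}.
torch.manual_seed(0)
backbone, clf = FrozenBackbone(), PrototypeClassifier()
x1, y1 = torch.randn(40, 32), torch.randint(0, 2, (40,))
x2, y2 = torch.randn(40, 32), torch.randint(2, 4, (40,))
learn_phase(backbone, clf, x1, y1)
learn_phase(backbone, clf, x2, y2)
print("number of prototypes after two phases:", len(clf.prototypes))
```

In this toy setup only the prototype parameters receive gradients, so the frozen backbone cannot forget; what the abstract calls semantic drift would show up here as a mismatch between prototypes tuned in earlier phases and those tuned later, which is the bias IPT is designed to compensate for.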