We address the problem of class incremental learning, a core step towards achieving adaptive vision intelligence. In particular, we consider the task setting of incremental learning with limited memory and aim to achieve a better stability-plasticity trade-off. To this end, we propose a novel two-stage learning approach that utilizes a dynamically expandable representation for more effective incremental concept modeling. Specifically, at each incremental step, we freeze the previously learned representation and augment it with additional feature dimensions from a new learnable feature extractor. This enables us to integrate new visual concepts while retaining previously learned knowledge. We dynamically expand the representation according to the complexity of novel concepts by introducing a channel-level mask-based pruning strategy. Moreover, we introduce an auxiliary loss to encourage the model to learn diverse and discriminative features for novel concepts. We conduct extensive experiments on three class incremental learning benchmarks, and our method consistently outperforms other methods by a large margin.
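The core expansion step described above — freezing the previously learned representation and appending feature dimensions from a new learnable extractor — can be illustrated with a minimal NumPy sketch. This is a toy stand-in, not the paper's implementation: the linear "extractors", dimensions, and function names here are all hypothetical, and the real method uses CNN backbones with masking and pruning.

```python
import numpy as np

rng = np.random.default_rng(0)

class FeatureExtractor:
    """Toy linear-ReLU feature extractor (hypothetical stand-in for a CNN backbone)."""
    def __init__(self, in_dim, out_dim, frozen=False):
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.1
        self.frozen = frozen  # frozen extractors would receive no gradient updates

    def __call__(self, x):
        return np.maximum(x @ self.W, 0.0)  # ReLU features

def expanded_representation(extractors, x):
    """Concatenate feature dimensions from all (frozen + newly added) extractors."""
    return np.concatenate([f(x) for f in extractors], axis=1)

# Incremental step: the old extractor is frozen, a new learnable one is appended.
old = FeatureExtractor(8, 16, frozen=True)   # learned at a previous step, now frozen
new = FeatureExtractor(8, 16, frozen=False)  # learnable extractor for the new classes

x = rng.standard_normal((4, 8))              # a toy batch of 4 inputs
feats = expanded_representation([old, new], x)
print(feats.shape)  # (4, 32): the 16 old dims are preserved, 16 new dims appended
```

A classifier head over the concatenated 32-dimensional representation would then be retrained at each step, so old-class knowledge lives in the frozen dimensions while new-class capacity comes from the appended ones.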