Online Class-Incremental (OCI) learning has sparked new approaches that expand previously trained model knowledge from sequentially arriving data streams containing new classes. Unfortunately, OCI learning can suffer from catastrophic forgetting (CF), as the decision boundaries of old classes become inaccurate when perturbed by new ones. Existing literature has applied data augmentation (DA) to alleviate model forgetting, yet the role of DA in OCI has not been well understood so far. In this paper, we show theoretically that augmented samples with lower correlation to the original data are more effective in preventing forgetting. However, aggressive augmentation may also reduce the consistency between data and their corresponding labels, which motivates us to exploit proper DA to boost OCI performance and prevent the CF problem. We propose the Enhanced Mixup (EnMix) method, which mixes augmented samples and their labels simultaneously; it is shown to enhance sample diversity while maintaining strong consistency with the corresponding labels. Further, to address the class imbalance problem, we design an Adaptive Mixup (AdpMix) method that calibrates the decision boundaries by mixing samples from both old and new classes and dynamically adjusting the label mixing ratio. Extensive experiments on several benchmark datasets demonstrate that our approach is effective and compatible with other replay-based techniques.
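As a minimal illustration of the two mixing schemes (in standard Mixup-style notation; the augmentation operator $\mathcal{A}$, the Beta-distributed coefficient $\lambda$, and the adjusted ratio $\lambda'$ are illustrative assumptions rather than the exact formulation developed in the paper body): EnMix draws $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$ and mixes independently augmented samples together with their labels,
$$\tilde{x} = \lambda\,\mathcal{A}(x_i) + (1-\lambda)\,\mathcal{A}(x_j), \qquad \tilde{y} = \lambda\, y_i + (1-\lambda)\, y_j,$$
while AdpMix applies the same sample-level mixing to pairs drawn from an old and a new class, $\tilde{x} = \lambda\, x_{\mathrm{old}} + (1-\lambda)\, x_{\mathrm{new}}$, but replaces the label ratio with a dynamically adjusted $\lambda'$, i.e. $\tilde{y} = \lambda'\, y_{\mathrm{old}} + (1-\lambda')\, y_{\mathrm{new}}$, so that the decision boundary is calibrated toward the under-represented old classes.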