Rehearsal approaches in class incremental learning (CIL) suffer from decision boundaries that overfit to new classes, which is mainly caused by two factors: the insufficiency of old-class data for knowledge distillation and the imbalanced learning between old and new classes due to limited storage memory. In this work, we present a simple but effective approach to tackle these two factors. First, we employ a re-sampling strategy with Mixup Knowledge Distillation (Re-MKD) to improve the performance of knowledge distillation (KD), which greatly alleviates the overfitting problem. Specifically, we combine mixup and re-sampling strategies to synthesize adequate training data for KD that are more consistent with the latent distribution between the old and new classes. Second, we propose a novel incremental influence balance (IIB) method that tackles imbalanced data classification by extending the influence balance method to the CIL setting, re-weighting samples by their influence to create a proper decision boundary. Combining these two improvements, we present the effective decision boundary learning algorithm (EDBL), which improves KD performance and handles imbalanced data learning simultaneously. Experiments show that the proposed EDBL achieves state-of-the-art performance on several CIL benchmarks.
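The mixup step used to synthesize KD training data can be sketched as below. This is a minimal illustration of standard mixup applied to an old-class/new-class pair, not the authors' exact Re-MKD pipeline; the function name, tensor shapes, and one-hot labels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Synthesize a sample as a convex combination of two inputs.

    Mixing a rehearsal (old-class) sample with a new-class sample
    produces data lying between the two distributions, which can give
    the distillation loss more coverage of the latent space.
    """
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2       # mixed input
    y = lam * y1 + (1.0 - lam) * y2       # soft (mixed) label
    return x, y

# Hypothetical example: mix an old-class image with a new-class image.
x_old, y_old = np.ones((3, 4, 4)), np.array([1.0, 0.0])   # one-hot label
x_new, y_new = np.zeros((3, 4, 4)), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_old, y_old, x_new, y_new)
```

The mixed pair `(x_mix, y_mix)` would then be fed to both the old (teacher) and current (student) models during KD training.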