Real-world applications require classification models to adapt to new classes without forgetting old ones. Correspondingly, Class-Incremental Learning (CIL) aims to train a model within a limited memory budget to meet this requirement. Typical CIL methods save representative exemplars from former classes to resist forgetting, while recent works find that storing historical models can substantially boost performance. However, these stored models are not counted toward the memory budget, which implicitly results in unfair comparisons. We find that when the model size is counted into the total budget and methods are compared with aligned memory, saving models does not consistently help, especially when the memory budget is limited. Consequently, different CIL methods should be evaluated holistically across memory scales, measuring accuracy and memory size simultaneously. On the other hand, we dive deeply into the construction of the memory buffer for memory efficiency. By analyzing the effect of different layers in the network, we find that shallow and deep layers have different characteristics in CIL. Motivated by this, we propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel. MEMO extends specialized layers on top of shared generalized representations, efficiently extracting diverse representations at modest cost while maintaining representative exemplars. Extensive experiments on benchmark datasets validate MEMO's competitive performance.
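For concreteness, the memory-aligned comparison described above can be pictured with a back-of-the-envelope sketch. The numbers below are illustrative assumptions rather than values taken from the paper: float32 parameters, raw uint8 exemplars of size 32x32x3, and a ResNet-32-scale backbone of roughly 0.46M parameters.

```python
# Back-of-the-envelope memory accounting for aligning CIL budgets.
# Assumptions (illustrative): float32 weights, CIFAR-sized uint8 exemplars.

BYTES_PER_PARAM = 4               # float32 weight
BYTES_PER_EXEMPLAR = 32 * 32 * 3  # one 32x32x3 uint8 image


def params_to_exemplars(num_params: int) -> int:
    """How many exemplar images occupy the same memory as `num_params` weights."""
    return (num_params * BYTES_PER_PARAM) // BYTES_PER_EXEMPLAR


# Storing an extra historical backbone is not free: under an aligned budget,
# it displaces this many exemplars that could have been kept instead.
print(params_to_exemplars(464_000))  # ResNet-32-scale backbone -> roughly 600 exemplars
```

Under this accounting, a method that keeps extra backbones must give up a corresponding number of exemplars before its accuracy can be compared fairly with an exemplar-only method.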
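The expandable design can likewise be pictured with a minimal PyTorch-style sketch, assuming a backbone split into shared shallow blocks (generalized) and per-task deep blocks (specialized). The module names, split point, feature sizes, and classifier handling are hypothetical simplifications, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class MemoStyleNet(nn.Module):
    """Sketch: shallow layers are shared across tasks (generalized),
    while a new deep block (specialized) is appended per incremental task."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Shared, generalized shallow layers reused across all tasks.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # One specialized deep block per incremental task.
        self.specialized = nn.ModuleList()
        self.classifier = None
        self.feat_dim = feat_dim

    def add_task(self, num_total_classes: int):
        """Expand: append a specialized block and rebuild the classifier
        (weight transfer from the old classifier is omitted for brevity)."""
        block = nn.Sequential(
            nn.Conv2d(self.feat_dim, self.feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.specialized.append(block)
        # The classifier sees the concatenation of all specialized features.
        self.classifier = nn.Linear(
            self.feat_dim * len(self.specialized), num_total_classes
        )

    def forward(self, x):
        base = self.shared(x)                                 # generalized features
        feats = [block(base) for block in self.specialized]   # diverse deep features
        return self.classifier(torch.cat(feats, dim=1))


# Usage: expand once per incremental stage, e.g. 10 new classes each time.
net = MemoStyleNet()
net.add_task(10)   # stage 1: classes 0-9
net.add_task(20)   # stage 2: classes 0-19
logits = net(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 20])
```

Because only the deep blocks are duplicated, expansion costs far fewer parameters than copying the whole backbone, leaving more of the aligned budget for exemplars.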