Class-incremental learning (CIL) trains a classification model on data from different classes that arrive progressively. Existing CIL methods either suffer serious accuracy loss due to catastrophic forgetting, or invade data privacy by revisiting stored exemplars. Inspired by linear learning formulations, we propose analytic class-incremental learning (ACIL), which achieves absolute memorization of past knowledge while avoiding breaches of data privacy (i.e., without storing historical data). Absolute memorization means that incremental learning with ACIL on present data alone yields results identical to those of its joint-learning counterpart, which consumes both present and historical samples. This equivalence is theoretically validated. Data privacy is ensured since no historical data are involved in the learning process. Empirical validations demonstrate ACIL's competitive accuracy, with near-identical results across various incremental task settings (e.g., 5-50 phases). This also allows ACIL to outperform state-of-the-art methods in large-phase scenarios (e.g., 25 and 50 phases).
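The equivalence claimed above can be illustrated in miniature with recursive ridge regression, the kind of linear learning formulation the abstract alludes to: updating a regularized least-squares classifier phase by phase, using only the current phase's data, reproduces the solution obtained by solving on all data jointly. This is a minimal sketch under assumed dimensions, regularization value, and random data, not the paper's actual network or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, lam = 8, 3, 1e-2  # feature dim, num classes, ridge term (illustrative)

# Two "phases" of data; features X and one-hot-like targets Y (random here)
X1, Y1 = rng.standard_normal((20, d)), rng.standard_normal((20, c))
X2, Y2 = rng.standard_normal((15, d)), rng.standard_normal((15, c))

# Joint-learning counterpart: solve on all data at once
X, Y = np.vstack([X1, X2]), np.vstack([Y1, Y2])
W_joint = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Incremental: phase 1 solution, then a phase-2 update that never
# touches X1 or Y1 again (only the inverse autocorrelation R is kept)
R = np.linalg.inv(X1.T @ X1 + lam * np.eye(d))
W = R @ X1.T @ Y1
# Woodbury-style update of R, then a recursive least-squares weight update
K = np.linalg.inv(np.eye(len(X2)) + X2 @ R @ X2.T)
R = R - R @ X2.T @ K @ X2 @ R
W = W + R @ X2.T @ (Y2 - X2 @ W)

assert np.allclose(W, W_joint)  # incremental result equals joint solution
```

Only the small d-by-d matrix R and the weights W carry information forward, so no historical samples need to be stored, mirroring the privacy argument in the abstract.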