Continual learning enables incremental learning of new tasks without forgetting previously learned ones, yielding positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free method built on a prototypical part-based architecture. It consists of three crucial novelties: an interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; a proximity-based prototype initialization strategy dedicated to the fine-grained setting; and a task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces interpretability concept drift and outperforms existing exemplar-free methods for common class-incremental learning when applied to concept-based models. We make the code available.