Class-incremental continual learning is a core step towards developing artificial intelligence systems that can continuously adapt to changes in the environment by learning new concepts without forgetting previously learned ones. This is especially needed in the medical domain, where models must continually learn from new incoming data to classify an expanding set of diseases. In this work, we focus on how old knowledge can be leveraged to learn new classes without catastrophic forgetting. We propose a framework comprising two main components: (1) a dynamic architecture with expanding representations to preserve previously learned features and accommodate new ones; and (2) a training procedure that alternates between two objectives to balance learning new features with maintaining the model's performance on old classes. Experimental results on multiple medical datasets show that our solution achieves superior performance over state-of-the-art baselines in terms of class accuracy and forgetting.
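The abstract does not fix implementation details, so the following is only a minimal sketch of the two components under assumed design choices: the `ExpandableNet` class, its small CNN branches, and the KL-distillation term used as the "old-class" objective in `alternating_step` are illustrative stand-ins, not the paper's actual architecture or losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpandableNet(nn.Module):
    """Dynamic architecture sketch: one frozen feature-extractor branch per
    previously learned task, plus a new trainable branch added per task."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.extractors = nn.ModuleList()  # one branch per task
        self.feat_dim = feat_dim
        self.classifier = None             # rebuilt when the class set expands
        self.num_classes = 0

    def _make_extractor(self):
        # Hypothetical tiny CNN backbone; a stand-in for the real encoder.
        return nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, self.feat_dim),
        )

    def expand(self, new_classes):
        # Freeze existing branches to preserve previously learned features.
        for p in self.extractors.parameters():
            p.requires_grad_(False)
        self.extractors.append(self._make_extractor())
        self.num_classes += new_classes
        # Classifier over the concatenation of all branches
        # (rebuilt from scratch here purely for simplicity).
        self.classifier = nn.Linear(self.feat_dim * len(self.extractors),
                                    self.num_classes)

    def forward(self, x):
        feats = torch.cat([e(x) for e in self.extractors], dim=1)
        return self.classifier(feats)


def alternating_step(model, old_model, x, y, opt, step):
    """Alternate between (a) cross-entropy on the new classes and
    (b) a distillation term keeping old-class logits close to the previous
    model's outputs (an assumed proxy for the old-class objective)."""
    opt.zero_grad()
    logits = model(x)
    if step % 2 == 0 or old_model is None:
        loss = F.cross_entropy(logits, y)           # learn new features
    else:
        with torch.no_grad():
            old_logits = old_model(x)
        n_old = old_logits.shape[1]
        loss = F.kl_div(F.log_softmax(logits[:, :n_old], dim=1),
                        F.softmax(old_logits, dim=1),
                        reduction="batchmean")      # preserve old classes
    loss.backward()
    opt.step()
    return loss.item()
```

In this sketch, each incremental task would call `model.expand(k)` for its `k` new classes, snapshot the previous model as `old_model`, and then alternate the two objectives across the task's training batches.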