Deployed intelligent diagnosis systems currently lack the ability to continually learn to diagnose new diseases while preserving knowledge of old ones. In particular, updating an intelligent diagnosis system with training data for new diseases causes catastrophic forgetting of old disease knowledge. To address this issue, a novel adapter-based strategy is proposed that effectively learns a set of new diseases at each round (or task) of continual learning without changing the shared feature extractor. The learnable, lightweight, task-specific adapter(s) can be flexibly designed (e.g., as two convolutional layers) and added to the pretrained and frozen feature extractor. Together with a specially designed task-specific head that absorbs all previously learned old diseases into a single 'out-of-distribution' category, the task-specific adapter(s) help the pretrained feature extractor extract more discriminative features between diseases. In addition, a simple yet effective fine-tuning strategy is applied to collaboratively fine-tune the multiple task-specific heads, so that outputs from different heads are comparable and the appropriate classifier head can be selected more accurately during model inference. Extensive empirical evaluations on three image datasets demonstrate the superior performance of the proposed method in continual learning of new diseases. The source code will be released publicly.
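The adapter-and-head design described above can be sketched as follows. This is a minimal illustrative sketch in PyTorch, assuming a residual two-convolution adapter on top of a frozen backbone and a linear head with one extra logit for the 'out-of-distribution' category; the layer shapes and the residual connection are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class TaskAdapter(nn.Module):
    """Lightweight task-specific adapter: two convolutional layers
    applied to features from the frozen, pretrained extractor."""
    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual connection (an assumed design choice) keeps the
        # frozen backbone features intact while learning a task-specific shift.
        return x + self.block(x)

class TaskHead(nn.Module):
    """Task-specific head: one logit per new disease in this task, plus one
    extra logit absorbing all previously learned diseases as a single
    'out-of-distribution' category."""
    def __init__(self, feat_dim: int, num_new_classes: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_new_classes + 1)  # +1 for the OOD class

    def forward(self, feats):
        return self.fc(feats)

# Usage sketch: freeze the shared backbone; only the current task's
# adapter and head are trainable at each round of continual learning.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.AdaptiveAvgPool2d(4))
for p in backbone.parameters():
    p.requires_grad = False

adapter = TaskAdapter(channels=16)
head = TaskHead(feat_dim=16 * 4 * 4, num_new_classes=5)

x = torch.randn(2, 3, 32, 32)           # a toy batch of two RGB images
feats = adapter(backbone(x)).flatten(1)  # adapted, flattened features
logits = head(feats)                     # 5 new diseases + 1 OOD logit
print(logits.shape)                      # torch.Size([2, 6])
```

At inference, one such head exists per learned task; because the heads are collaboratively fine-tuned, their output scores are comparable and the head producing the most confident in-distribution prediction can be selected.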