Deep learning has achieved human-level performance in various applications. However, current deep learning models suffer from catastrophic forgetting of old knowledge when learning new classes. This is particularly challenging for intelligent diagnosis systems, where training data are initially available for only a limited number of diseases; updating such a system with data of new diseases would inevitably degrade its performance on previously learned diseases. Inspired by how human brains acquire new knowledge, we propose a Bayesian generative model for continual learning built on a fixed pre-trained feature extractor. In this model, the knowledge of each old class is compactly represented by a collection of statistical distributions, e.g. Gaussian mixture models, and is therefore naturally protected from forgetting during continual learning. Unlike existing class-incremental learning methods, the proposed approach is insensitive to the continual learning process and also applies well to the data-incremental learning scenario. Experiments on multiple medical and natural image classification tasks show that the proposed approach outperforms state-of-the-art methods, even those that retain some images of old classes while continually learning new classes.
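To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of a generative per-class classifier of this kind: features from a fixed pre-trained extractor are modelled with one Gaussian mixture per class, a new class is added by fitting one more mixture while old ones stay untouched, and prediction follows Bayes' rule over class-conditional likelihoods. The class name `GenerativeContinualClassifier` and the hyperparameters (`n_components`, diagonal covariances) are illustrative assumptions, not details from the paper.

```python
# Minimal sketch, assuming features come from a frozen pre-trained extractor.
# Each class is represented by its own GaussianMixture; learning a new class
# never modifies old-class models, so old knowledge is kept from forgetting.
import numpy as np
from sklearn.mixture import GaussianMixture

class GenerativeContinualClassifier:
    def __init__(self, n_components=3):
        self.n_components = n_components
        self.gmms = {}        # class label -> fitted GaussianMixture
        self.counts = {}      # class label -> number of training features
        self.log_priors = {}  # class label -> log prior from class counts

    def add_class(self, label, features):
        """Learn (or refresh) one class from its own feature vectors only."""
        gmm = GaussianMixture(n_components=self.n_components,
                              covariance_type='diag', reg_covar=1e-4)
        gmm.fit(features)
        self.gmms[label] = gmm
        self.counts[label] = len(features)
        total = sum(self.counts.values())
        self.log_priors = {c: np.log(n / total)
                           for c, n in self.counts.items()}

    def predict(self, features):
        """Bayes rule: argmax over classes of log p(x|c) + log p(c)."""
        labels = list(self.gmms)
        # score_samples returns the per-sample log-likelihood under each GMM.
        scores = np.stack([self.gmms[c].score_samples(features)
                           + self.log_priors[c] for c in labels], axis=1)
        return [labels[i] for i in scores.argmax(axis=1)]
```

Under this sketch, the class-incremental scenario corresponds to calling `add_class` for each newly arriving disease class, while the data-incremental scenario corresponds to re-calling `add_class` for an existing label with its enlarged feature set; neither operation touches the models of other classes.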