Catastrophic forgetting of previously learnt classes is one of the main obstacles to developing a reliable and accurate generative continual learning model. When new classes are learnt, the internal representations of previously learnt ones are often overwritten, so the model's "memory" of earlier classes is lost over time. Recent developments in neuroscience have uncovered a mechanism by which the brain avoids its own form of memory interference: by applying a targeted exaggeration of the differences between the features of similar, competing memories, the brain can more easily distinguish and recall them. In this paper, the application of such exaggeration, via the repulsion of replayed samples belonging to competing classes, is explored. Through the development of a 'reconstruction repulsion' loss, this paper presents a new state-of-the-art performance on the classification of early classes in the class-incremental learning setting on CIFAR100.
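To make the repulsion idea concrete, the following is a minimal sketch, not the paper's actual loss: a hypothetical repulsion term that penalises cosine similarity between replayed reconstructions of two competing classes, with a hinge margin so that already well-separated samples contribute nothing. The function name, margin parameter, and hinge form are illustrative assumptions.

```python
import numpy as np

def reconstruction_repulsion(recons_a, recons_b, margin=0.5):
    """Hypothetical repulsion term (illustrative, not the paper's exact loss).

    Penalises cosine similarity between reconstructions of two competing
    classes; pairs whose similarity is already below `margin` contribute 0.
    Each input is an array with one flattened reconstruction per row.
    """
    # L2-normalise each reconstruction so the dot product is cosine similarity
    a = recons_a / np.linalg.norm(recons_a, axis=1, keepdims=True)
    b = recons_b / np.linalg.norm(recons_b, axis=1, keepdims=True)
    # Pairwise cosine similarities between the two replay batches
    sims = a @ b.T
    # Hinge: only similarities above the margin are penalised
    return np.maximum(sims - margin, 0.0).mean()
```

Minimising this term alongside the usual reconstruction objective would push replayed samples of competing classes apart in feature space, echoing the exaggeration mechanism described above.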