In continual learning, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new ones. While deep neural networks have achieved resounding success in the classical supervised setting, they are known to forget knowledge acquired in prior episodes of learning when the examples encountered in the current episode differ drastically from those seen before. In this paper, we propose a new method that both leverages the expressive power of deep neural networks and remains resilient to forgetting when new categories are introduced. We find that the proposed method reduces forgetting by 2.3x to 6.9x on CIFAR-10 compared to existing methods, and by 1.8x to 2.7x on ImageNet compared to an oracle baseline.