One of the main motivations for studying continual learning is that the problem setting allows a model to accrue knowledge from past tasks and learn new tasks more efficiently. However, recent studies suggest that the key metric continual learning algorithms optimize, the reduction of catastrophic forgetting, does not correlate well with the forward transfer of knowledge. We believe that the conclusion these works reached is a consequence of how they measure forward transfer. We argue that the measure of forward transfer to a task should not be affected by the restrictions placed on the continual learner in order to preserve knowledge of previous tasks. Instead, forward transfer should be measured by how easy it is to learn a new task given a set of representations produced by continual learning on previous tasks. Under this notion of forward transfer, we evaluate different continual learning algorithms on a variety of image classification benchmarks. Our results indicate that less forgetful representations lead to better forward transfer, suggesting a strong correlation between retaining past information and learning efficiency on new tasks. Further, we find less forgetful representations to be more diverse and discriminative than their forgetful counterparts.
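For illustration, the following is a minimal sketch of this notion of forward transfer, not the paper's exact protocol: learnability of a new task is estimated by fitting only a linear probe on features from the frozen, continually trained encoder and reporting its test accuracy. The function name `probe_forward_transfer`, the `encoder`, and the data loaders are assumed placeholders in a PyTorch setup.

```python
# Sketch: measure forward transfer as how easily a new task is learned from
# frozen representations of a continually trained encoder (linear probing).
# All names here are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


def probe_forward_transfer(encoder, feat_dim, num_classes,
                           train_loader, test_loader,
                           epochs=10, lr=1e-2, device="cpu"):
    """Fit a linear classifier on frozen features; return test accuracy."""
    encoder.eval()                       # representations are kept fixed
    probe = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(probe.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():        # no gradients flow into the encoder
                feats = encoder(x)
            opt.zero_grad()
            loss = loss_fn(probe(feats), y)
            loss.backward()
            opt.step()

    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            pred = probe(encoder(x)).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total               # higher = easier to learn the new task
```

Because only the probe is trained, the score reflects the quality of the representations themselves rather than any restrictions the continual learner imposes to preserve earlier tasks.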