Rehearsal, which retrains on a small stored subset of data from old tasks, has proven effective at mitigating catastrophic forgetting in continual learning. However, because the sampled subset can be heavily biased relative to the original dataset, retraining on it tends to drive continual domain drift of old tasks in feature space, which in turn causes forgetting. In this paper, we tackle the continual domain drift problem with centroid distance distillation. First, we propose a centroid caching mechanism that samples data points based on constructed centroids to reduce the sampling bias in rehearsal. Then, we present a centroid distance distillation that stores only the centroid distances to reduce continual domain drift. Experiments on four continual learning datasets show the superiority of the proposed method and confirm that continual domain drift is reduced.
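To make the two components concrete, the following is a minimal sketch of one possible instantiation, not the paper's actual implementation: class centroids are built from stored features, rehearsal samples are drawn near those centroids to reduce sampling bias, and only the inter-centroid distance matrix is kept for a distillation penalty against drift. All names (`class_centroids`, `sample_near_centroids`, the MSE form of the loss, and the choice of Euclidean distance) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def class_centroids(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean feature vector per class (one constructed centroid per class). Sketch only."""
    classes = labels.unique(sorted=True)
    return torch.stack([features[labels == c].mean(dim=0) for c in classes])


def centroid_distance_matrix(centroids: torch.Tensor) -> torch.Tensor:
    """Pairwise Euclidean centroid distances; only this matrix would be stored."""
    return torch.cdist(centroids, centroids, p=2)


def sample_near_centroids(features: torch.Tensor, labels: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Centroid-based rehearsal sampling: keep the k points closest to each class
    centroid, reducing the bias of a purely random subset (assumed strategy)."""
    keep = []
    for c in labels.unique(sorted=True):
        idx = (labels == c).nonzero(as_tuple=True)[0]
        centroid = features[idx].mean(dim=0, keepdim=True)
        dists = torch.cdist(features[idx], centroid).squeeze(1)
        keep.append(idx[dists.topk(min(k, len(idx)), largest=False).indices])
    return torch.cat(keep)


def centroid_distance_distillation_loss(current_feats: torch.Tensor,
                                        labels: torch.Tensor,
                                        stored_dist: torch.Tensor) -> torch.Tensor:
    """Penalize drift of the inter-centroid distance structure under the new model,
    assuming the same set of old classes is present in the rehearsal batch."""
    current_dist = centroid_distance_matrix(class_centroids(current_feats, labels))
    return F.mse_loss(current_dist, stored_dist)
```

In this reading, the memory cost scales with the number of old classes (a distance matrix) rather than with the number of stored raw samples, which is the motivation for distilling distances instead of features.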