Most standard learning approaches lead to fragile models that are prone to drift when sequentially trained on samples of a different nature, the well-known "catastrophic forgetting" issue. In particular, when a model consecutively learns from different visual domains, it tends to forget the past domains in favor of the most recent ones. In this context, we show that one way to learn models that are inherently more robust against forgetting is domain randomization: for vision tasks, randomizing the current domain's distribution with heavy image manipulations. Building on this result, we devise a meta-learning strategy in which a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different "auxiliary" meta-domains, while also easing adaptation to them. These meta-domains are likewise generated through randomized image manipulations. We empirically demonstrate in a variety of experiments, spanning from classification to semantic segmentation, that our approach yields models that are less prone to catastrophic forgetting when transferred to new domains.
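To make the two ingredients concrete, below is a minimal PyTorch sketch of how such a meta-objective could be assembled: an auxiliary meta-domain is generated by heavily perturbing the current batch, the model takes one differentiable adaptation step toward it, and the post-adaptation loss acts as the regularizer. The specific augmentations, the single inner step, and the hyperparameters (`inner_lr`, `reg_weight`) are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call
from torchvision import transforms

# Hypothetical pool of "heavy" image manipulations used to randomize the
# current domain; the exact transforms are an assumption for illustration.
heavy_augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.8, contrast=0.8, saturation=0.8, hue=0.4),
    transforms.RandomGrayscale(p=0.5),
    transforms.GaussianBlur(kernel_size=5),
])

def meta_objective(model, images, labels, inner_lr=0.01, reg_weight=1.0):
    """Combine the current-domain loss with a meta-regularizer computed on a
    randomized auxiliary meta-domain (illustrative sketch)."""
    params = dict(model.named_parameters())

    # Standard supervised loss on the current domain.
    logits = functional_call(model, params, (images,))
    loss_current = F.cross_entropy(logits, labels)

    # Generate an auxiliary meta-domain; labels are preserved because the
    # manipulations only alter appearance, not content.
    meta_images = heavy_augment(images)

    # Inner loop: one differentiable gradient step adapting to the meta-domain.
    inner_logits = functional_call(model, params, (meta_images,))
    inner_loss = F.cross_entropy(inner_logits, labels)
    grads = torch.autograd.grad(inner_loss, tuple(params.values()),
                                create_graph=True)
    adapted = {name: p - inner_lr * g
               for (name, p), g in zip(params.items(), grads)}

    # Regularizer: penalize the loss the *adapted* model still incurs on the
    # meta-domain, backpropagating through the adaptation step itself.
    adapted_logits = functional_call(model, adapted, (meta_images,))
    loss_meta = F.cross_entropy(adapted_logits, labels)

    return loss_current + reg_weight * loss_meta
```

Backpropagating this combined objective updates the model so that it both fits the current domain and remains easy to adapt to heavily perturbed versions of it, which is the property the abstract credits for the reduced forgetting.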