Human beings can quickly adapt to environmental changes by leveraging past learning experience. Adapting deep neural networks to dynamic environments with machine learning algorithms, however, remains a challenge. To better understand this issue, we study the problem of continual domain adaptation, where the model is presented with a labelled source domain and a sequence of unlabelled target domains. This problem poses two obstacles: domain shift and catastrophic forgetting. We propose Gradient Regularized Contrastive Learning (GRCL) to address both. At the core of our method, gradient regularization plays two key roles: (1) it constrains the gradient so as not to harm the discriminative ability of source features, which in turn benefits the model's ability to adapt to target domains; (2) it constrains the gradient so as not to increase the classification loss on old target domains, which allows the model to preserve its performance on old target domains while adapting to an incoming target domain. Experiments on the Digits, DomainNet and Office-Caltech benchmarks demonstrate the strong performance of our approach compared to other state-of-the-art methods.
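To make the two roles of gradient regularization concrete, the sketch below shows one common way to realize such constraints: projecting the adaptation gradient onto the half-space where it does not increase a reference loss (to first order). This is a minimal illustrative approximation in PyTorch, not the authors' exact GRCL implementation (which may solve a joint constrained optimization and uses a contrastive adaptation objective); the model, the entropy-minimization placeholder loss, and the replay batches are hypothetical stand-ins.

```python
# Sketch: gradient projection against reference losses, assuming PyTorch.
# All names below (model, batches, losses) are hypothetical stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
ce = nn.CrossEntropyLoss()
params = [p for p in model.parameters() if p.requires_grad]

def flat_grad(loss):
    # Flatten the gradients of `loss` w.r.t. all trainable parameters.
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def project(g, g_ref, eps=1e-12):
    # If the proposed update conflicts with the reference gradient
    # (negative dot product), remove the conflicting component so the
    # reference loss is not increased to first order.
    dot = torch.dot(g, g_ref)
    if dot < 0:
        g = g - (dot / (torch.dot(g_ref, g_ref) + eps)) * g_ref
    return g

# Hypothetical batches: labelled source, replayed old targets, current target.
xs, ys = torch.randn(16, 32), torch.randint(0, 10, (16,))
xm, ym = torch.randn(16, 32), torch.randint(0, 10, (16,))
xt = torch.randn(16, 32)

# Placeholder unsupervised adaptation objective (entropy minimization),
# standing in for the contrastive loss used on the unlabelled target domain.
probs = model(xt).softmax(dim=1)
adapt_loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

g = flat_grad(adapt_loss)
g = project(g, flat_grad(ce(model(xs), ys)))  # role (1): keep source features discriminative
g = project(g, flat_grad(ce(model(xm), ym)))  # role (2): do not increase loss on old targets

# Write the regularized gradient back into the parameters and update.
offset = 0
for p in params:
    n = p.numel()
    p.grad = g[offset:offset + n].view_as(p)
    offset += n
opt.step()
```

Applying the two projections sequentially, as here, only approximates satisfying both constraints simultaneously; it suffices to show how a single gradient step can adapt to the current target domain without, to first order, degrading source discriminability or performance on previously seen targets.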