Source-free domain adaptation (SFDA) transfers knowledge from a model trained on a source domain to target domains without access to the source data. SFDA has recently gained popularity because it protects the data privacy of the source domain, but the lack of source data also makes it prone to catastrophic forgetting on that domain. To systematically investigate the mechanism of catastrophic forgetting, we first reimplement previous SFDA approaches within a unified framework and evaluate them on four benchmarks. We observe a trade-off between adaptation gain and forgetting loss, which motivates us to design a consistency regularization that mitigates forgetting. In particular, we propose CoSDA, a continual source-free domain adaptation approach that employs a dual-speed optimized teacher-student model pair and is equipped with consistency learning. Our experiments demonstrate that CoSDA outperforms state-of-the-art approaches in continual adaptation. Notably, CoSDA can also be integrated with other SFDA methods to alleviate forgetting.
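To make the mechanism concrete, the following is a minimal sketch, assuming a PyTorch-style setup, of how a dual-speed teacher-student pair with consistency regularization could be wired up. The helper names (`build_teacher`, `ema_update`, `consistency_loss`), the momentum value, and the KL-based loss form are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a dual-speed teacher-student
# update with consistency regularization, assuming a PyTorch-style setup.
import copy
import torch
import torch.nn.functional as F

def build_teacher(student: torch.nn.Module) -> torch.nn.Module:
    """The teacher starts as a frozen copy of the (source-trained) student."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, momentum: float = 0.99):
    """Slow teacher update: exponential moving average of student weights."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def consistency_loss(student, teacher, x, x_aug):
    """Student predictions on augmented inputs should stay consistent with
    teacher predictions on the clean inputs (hypothetical loss form)."""
    with torch.no_grad():
        target = F.softmax(teacher(x), dim=1)
    log_pred = F.log_softmax(student(x_aug), dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```

In such a setup, the student adapts quickly to each target domain via gradient updates, while the slowly moving teacher (controlled by the momentum) anchors the consistency target and thereby limits drift away from source knowledge.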