Domain adaptation aims to align a labeled source domain with an unlabeled target domain, and most existing approaches assume that the source data are accessible. Unfortunately, this paradigm raises concerns about data privacy and security. Recent studies try to dispel these concerns with the Source-Free setting, which adapts the source-trained model to the target domain without exposing the source data. However, the Source-Free paradigm still risks data leakage through adversarial attacks on the source model. Hence, the Black-Box setting has been proposed, in which only the outputs of the source model can be utilized. In this paper, we address both Source-Free adaptation and Black-Box adaptation, proposing a novel method that learns better target representations from Frequency MixUp and Mutual Learning (FMML). Specifically, we introduce a new data augmentation technique, Frequency MixUp, which highlights task-relevant objects in the interpolated images, thereby enhancing class consistency and linear behavior of the target model. Moreover, we bring a network regularization method, Mutual Learning, to the domain adaptation problem. It transfers knowledge within the target model via self-knowledge distillation and thus alleviates overfitting to the source domain by learning multi-scale target representations. Extensive experiments show that our method achieves state-of-the-art performance on several benchmark datasets under both settings.
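The idea behind a frequency-domain mixup can be sketched as follows. This is a minimal NumPy illustration assuming a standard amplitude-spectrum interpolation (interpolate the Fourier amplitude spectra of two images while keeping the phase of one); the abstract does not specify the paper's exact formulation, so the function name `frequency_mixup`, the parameter `lam`, and the amplitude/phase split here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def frequency_mixup(x, y, lam=0.5):
    """Mix two images in the frequency domain (illustrative sketch).

    Interpolates the amplitude spectra of x and y with weight lam,
    keeps the phase of x, and transforms back to the spatial domain.
    Keeping the phase preserves object structure, so the interpolation
    stays semantically consistent with x.
    """
    fx = np.fft.fft2(x, axes=(0, 1))
    fy = np.fft.fft2(y, axes=(0, 1))
    # Interpolate amplitudes; retain the phase of the first image.
    amp = lam * np.abs(fx) + (1.0 - lam) * np.abs(fy)
    phase = np.angle(fx)
    mixed = amp * np.exp(1j * phase)
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))
```

With `lam=1.0` the function reconstructs `x` exactly, since the amplitude and phase of `x` are recombined unchanged; intermediate values of `lam` yield interpolations that vary image style while keeping the structure of `x`.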
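A generic mutual-learning objective of the kind described above can be sketched with a symmetric KL divergence between two predictions of the same target model (e.g., from different scales or heads). This is a hedged sketch of the standard mutual-learning loss, not the paper's exact objective; the function names and the epsilon for numerical stability are assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mutual_learning_loss(logits_a, logits_b, eps=1e-12):
    """Symmetric KL divergence between two sets of predictions.

    Each branch is trained to match the other's softened output,
    so knowledge is transferred inside the model (self-distillation)
    rather than from an external teacher.
    """
    pa, pb = softmax(logits_a), softmax(logits_b)

    def kl(p, q):
        return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1).mean()

    return 0.5 * (kl(pa, pb) + kl(pb, pa))
```

The loss is zero when both branches agree and grows as their predictive distributions diverge, which regularizes the target model toward consistent multi-scale representations.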