Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled target domain, but it requires access to the source data, which often raises concerns about data privacy, data portability, and data transmission efficiency. We study unsupervised model adaptation (UMA), also known as unsupervised domain adaptation without source data, an alternative setting that aims to adapt source-trained models towards target distributions without accessing source data. To this end, we design an innovative historical contrastive learning (HCL) technique that exploits historical source hypotheses to make up for the absence of source data in UMA. HCL addresses the UMA challenge from two perspectives. First, it introduces historical contrastive instance discrimination (HCID), which learns from target samples by contrasting their embeddings generated by the currently adapted model and the historical models. With the historical models, HCID encourages UMA to learn instance-discriminative target representations while preserving the source hypothesis. Second, it introduces historical contrastive category discrimination (HCCD), which pseudo-labels target samples to learn category-discriminative target representations. Specifically, HCCD re-weights pseudo labels according to their prediction consistency across the current and historical models. Extensive experiments show that HCL outperforms state-of-the-art methods consistently across a variety of visual tasks and setups.
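To make the two objectives concrete, below is a minimal PyTorch sketch of how the HCID and HCCD losses described above could be realized. The function names (`hcid_loss`, `hccd_loss`) and the particular consistency score used for re-weighting (the historical model's probability of the current pseudo label) are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the two HCL objectives (assumed names and forms).
import torch
import torch.nn.functional as F


def hcid_loss(cur_emb, hist_emb, temperature=0.07):
    """Historical contrastive instance discrimination (InfoNCE form).

    cur_emb:  (N, D) embeddings from the currently adapted model (queries).
    hist_emb: (N, D) embeddings of the same batch from a frozen historical
              (e.g., source-trained) model (keys).
    The positive key for each query is the historical embedding of the same
    instance; the other historical embeddings in the batch act as negatives.
    """
    q = F.normalize(cur_emb, dim=1)
    k = F.normalize(hist_emb, dim=1).detach()          # no gradient to history
    logits = q @ k.t() / temperature                   # (N, N) similarities
    labels = torch.arange(q.size(0), device=q.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)


def hccd_loss(cur_logits, hist_logits):
    """Historical contrastive category discrimination.

    Pseudo labels come from the current model; each sample is re-weighted by
    the prediction consistency between the current and historical models
    (here: the historical probability of the current pseudo label, one
    plausible choice of consistency score).
    """
    hist_prob = hist_logits.softmax(dim=1).detach()
    pseudo = cur_logits.argmax(dim=1)                           # (N,)
    weight = hist_prob.gather(1, pseudo[:, None]).squeeze(1)    # consistency
    ce = F.cross_entropy(cur_logits, pseudo, reduction="none")
    return (weight * ce).mean()


# Per batch of unlabeled target images x (hypothetical training step):
#   emb_c, logit_c = current_model(x)
#   emb_h, logit_h = historical_model(x)   # frozen snapshot of past weights
#   loss = hcid_loss(emb_c, emb_h) + hccd_loss(logit_c, logit_h)
```

In this sketch the historical model stands in for the absent source data: detaching its embeddings and predictions lets them serve as fixed anchors, so the adapted model stays close to the source hypothesis while learning target-discriminative features.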