Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations while concurrently preserving the task-discriminability knowledge gathered from the labeled source data. However, the requirement of simultaneous access to labeled source and unlabeled target data renders them unsuitable for the challenging source-free DA setting. The trivial solution of realizing an effective original-to-generic domain mapping improves transferability but degrades task discriminability. Upon analyzing the hurdles from both theoretical and empirical standpoints, we derive novel insights showing that a mixup between original samples and their translated generic counterparts enhances the discriminability-transferability trade-off while duly respecting the privacy-oriented source-free setting. A simple yet effective realization of the proposed insights on top of existing source-free DA approaches yields state-of-the-art performance with faster convergence. Beyond the single-source setting, we also outperform multi-source prior arts across both classification and semantic segmentation benchmarks.
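The core insight above, mixing an original sample with its translated generic-domain counterpart, can be sketched as standard mixup applied at the input level. The snippet below is a minimal illustration, assuming the generic-domain translation (here represented by a hypothetical pre-computed `x_generic` array) is produced by some original-to-generic mapping network; it is not the authors' exact implementation.

```python
import numpy as np

def mixup_original_generic(x_orig, x_generic, alpha=0.3, rng=None):
    """Mix an original sample with its translated generic-domain
    counterpart using a mixup coefficient lam ~ Beta(alpha, alpha)."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x_orig + (1.0 - lam) * x_generic

# Toy "images": in practice x_generic would come from a learned
# original-to-generic domain translation of x_orig (an assumption
# for illustration; the paper's pipeline defines the actual mapping).
x_orig = np.ones((3, 32, 32))      # original-domain sample
x_generic = np.zeros((3, 32, 32))  # its generic-domain translation
x_mix = mixup_original_generic(x_orig, x_generic)
```

Because the mixed sample interpolates between the two domains, it retains task-discriminative content from the original sample while inheriting the transferability of the generic representation, without ever requiring access to the labeled source data.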