Fine-tuning and Domain Adaptation have emerged as effective strategies for efficiently transferring deep learning models to new target tasks. However, target domain labels are inaccessible in many real-world scenarios. This has led to the development of Unsupervised Domain Adaptation (UDA) methods, which employ only unlabeled target samples. Furthermore, efficiency and privacy requirements may also prevent the use of source domain data during the adaptation stage. This challenging setting, known as Source-Free Unsupervised Domain Adaptation (SF-UDA), is gaining interest among researchers and practitioners due to its potential for real-world applications. In this paper, we provide the first in-depth analysis of the main design choices in SF-UDA through a large-scale empirical study across 500 models and 74 domain pairs. We pinpoint the normalization approach, pre-training strategy, and backbone architecture as the most critical factors. Based on our quantitative findings, we propose recipes to best tackle SF-UDA scenarios. Moreover, we show that SF-UDA remains competitive beyond standard benchmarks and backbone architectures, performing on par with UDA at a fraction of the data and computational cost. In the interest of reproducibility, we include the full experimental results and code as supplementary material.