Domain adaptation (DA) mitigates the domain-shift problem when transferring knowledge from an annotated source domain to a similar but different unlabeled target domain. However, existing models typically adopt a single ImageNet model as the backbone without exploring alternatives, and fine-tuning or retraining that backbone is time-consuming. Moreover, pseudo-labeling has been used to improve performance in the target domain, but how to generate confident pseudo labels and explicitly align domain distributions has not been well addressed. In this paper, we show how to efficiently select the best pre-trained features from seventeen well-known ImageNet models for unsupervised DA problems. In addition, we propose a recurrent pseudo-labeling model that uses these best pre-trained features (termed PRPL) to improve classification performance. To demonstrate the effectiveness of PRPL, we evaluate it on three benchmark datasets: Office+Caltech-10, Office-31, and Office-Home. Extensive experiments show that our model reduces computation time and boosts mean accuracy to 98.1%, 92.4%, and 81.2%, respectively, substantially outperforming the state of the art.
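To make the two ingredients named above concrete, here is a minimal PyTorch sketch (not the authors' released code) of (i) using a frozen pre-trained ImageNet model as a fixed feature extractor, avoiding fine-tuning, and (ii) a recurrent pseudo-labeling loop that keeps only confident target predictions. The backbone list, helper names (`extract_features`, `recurrent_pseudo_label`), round count, and the 0.9 confidence threshold are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# A few candidate backbones; the paper compares seventeen ImageNet models.
BACKBONES = ["resnet50", "resnet101", "densenet121"]

@torch.no_grad()
def extract_features(name, loader, device="cpu"):
    """Use a frozen ImageNet model as a fixed feature extractor (no fine-tuning).

    The loader yields (image, label) pairs; for the unlabeled target domain the
    labels may be dummies and are returned only for later evaluation.
    """
    net = models.get_model(name, weights="DEFAULT").to(device).eval()
    # Drop the ImageNet classifier head; the attribute name differs per family.
    if hasattr(net, "fc"):            # ResNet-style
        net.fc = nn.Identity()
    elif hasattr(net, "classifier"):  # DenseNet/VGG-style
        net.classifier = nn.Identity()
    feats, labels = [], []
    for x, y in loader:
        feats.append(net(x.to(device)).flatten(1).cpu())
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)

def recurrent_pseudo_label(src_f, src_y, tgt_f, num_classes,
                           rounds=5, thresh=0.9):
    """Hypothetical recurrent pseudo-labeling: train a linear classifier on
    source features, keep only confident target predictions as pseudo labels,
    retrain on source plus pseudo-labeled target, and repeat."""
    clf = nn.Linear(src_f.size(1), num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    train_x, train_y = src_f, src_y
    for _ in range(rounds):
        for _ in range(100):  # a few gradient steps per round
            opt.zero_grad()
            loss = F.cross_entropy(clf(train_x), train_y)
            loss.backward()
            opt.step()
        probs = F.softmax(clf(tgt_f), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > thresh  # only confident pseudo labels survive
        train_x = torch.cat([src_f, tgt_f[keep]])
        train_y = torch.cat([src_y, pseudo[keep]])
    return clf
```

Because the backbones stay frozen, features per candidate model are computed once and cached, so comparing many pre-trained models reduces to retraining a cheap classifier on each feature set rather than fine-tuning each network.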