Enhancing model prediction confidence on target data is an important objective in Unsupervised Domain Adaptation (UDA). In this paper, we explore adversarial training on penultimate activations, i.e., the input features of the final linear classification layer. We show that this strategy is more efficient, and better correlated with the goal of boosting prediction confidence, than adversarial training on input images or intermediate features, as used in previous works. Furthermore, since activation normalization is commonly used in domain adaptation to reduce the domain gap, we derive two variants and systematically analyze the effects of normalization on our adversarial training, both theoretically and through empirical analysis on real adaptation tasks. Extensive experiments are conducted on popular UDA benchmarks under both the standard setting and the source-data-free setting. The results validate that our method outperforms previous approaches. Code is available at https://github.com/tsun/APA.
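To make the core idea concrete, below is a minimal PyTorch-style sketch of adversarial training on penultimate activations, written in the spirit of virtual adversarial training (VAT): a worst-case perturbation is sought in activation space rather than input space, and the model is trained to keep its predictions stable under it. The function name, the `eps`/`xi` magnitudes, and the KL objective are illustrative assumptions, not the authors' exact APA formulation.

```python
import torch
import torch.nn.functional as F

def penultimate_adversarial_loss(features, classifier, eps=1.0, xi=1e-6):
    """VAT-style adversarial loss on penultimate activations (illustrative sketch).

    features:   (B, D) penultimate activations from the feature extractor
    classifier: the final linear classification layer
    eps:        radius of the adversarial perturbation (hypothetical default)
    xi:         small step used to probe the most sensitive direction
    """
    # Clean predictions, treated as a fixed target distribution.
    with torch.no_grad():
        p = F.softmax(classifier(features), dim=1)

    # Search for the worst-case direction on detached activations.
    z = features.detach()
    d = xi * F.normalize(torch.randn_like(z), dim=1)
    d.requires_grad_(True)
    adv_dist = F.kl_div(
        F.log_softmax(classifier(z + d), dim=1), p, reduction="batchmean"
    )
    grad = torch.autograd.grad(adv_dist, d)[0]
    r_adv = eps * F.normalize(grad, dim=1)

    # Training loss: predictions should stay stable under the perturbation;
    # gradients flow back through the feature extractor via `features`.
    return F.kl_div(
        F.log_softmax(classifier(features + r_adv), dim=1), p, reduction="batchmean"
    )
```

Because the perturbation lives in the (low-dimensional) penultimate space, only the final linear layer is re-evaluated during the adversarial search, which is what makes this cheaper than perturbing input images. The normalization variants discussed in the paper would correspond to applying the perturbation before or after normalizing `features`; the exact placement follows the paper, not this sketch.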