Unsupervised domain adaptation (UDA) involves a supervised loss in a labeled source domain and an unsupervised loss in an unlabeled target domain. It often suffers more severe overfitting than classical supervised learning, since the supervised source loss has a clear domain gap from the target and the unsupervised target loss is often noisy due to the lack of annotations. This paper presents RDA, a robust domain adaptation technique that introduces adversarial attacking to mitigate overfitting in UDA. We achieve robust domain adaptation with a novel Fourier adversarial attacking (FAA) method that allows large-magnitude perturbation noise while modifying image semantics minimally; the former is critical to the effectiveness of the generated adversarial samples given the existence of domain gaps. Specifically, FAA decomposes images into multiple frequency components (FCs) and generates adversarial samples by perturbing only certain FCs that capture little semantic information. With FAA-generated samples, training can continue its 'random walk' and drift into an area with a flat loss landscape, leading to more robust domain adaptation. Extensive experiments over multiple domain adaptation tasks show that RDA achieves superior performance across different computer vision tasks.
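The frequency-selective perturbation that FAA performs can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the paper's implementation: the actual FAA derives the perturbation adversarially from the training loss, whereas the sketch below uses random noise as a stand-in, and the `band` split between semantic and non-semantic frequency components is an assumed parameter.

```python
import numpy as np

def fourier_perturb(image, band=(0.25, 1.0), eps=0.3, rng=None):
    """Perturb only selected frequency components of a 2-D image in [0, 1].

    band: normalized radial frequency range to attack; higher frequencies
          are assumed here to carry little semantic information.
    eps:  relative perturbation magnitude in the frequency domain.
    """
    rng = np.random.default_rng() if rng is None else rng
    F = np.fft.fftshift(np.fft.fft2(image))      # centered spectrum
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2, w / 2
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # normalized radius
    mask = (r >= band[0]) & (r <= band[1])       # FCs selected for attack
    # Random complex noise scaled by the local spectrum magnitude;
    # FAA would instead use loss gradients here.
    noise = eps * (rng.standard_normal(F.shape) + 1j * rng.standard_normal(F.shape))
    F_adv = F + noise * np.abs(F) * mask         # perturb only masked FCs
    adv = np.real(np.fft.ifft2(np.fft.ifftshift(F_adv)))
    return np.clip(adv, 0.0, 1.0)
```

Because the mask excludes the low-frequency center of the spectrum, the coarse image content (and hence most semantics) is left untouched even when `eps` is large, which is the property the abstract highlights.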