Standard Unsupervised Domain Adaptation (UDA) methods assume the availability of both source and target data during adaptation. In this work, we investigate Source-free Unsupervised Domain Adaptation (SF-UDA), a specific case of UDA where a model is adapted to a target domain without access to source data. We propose a novel approach for the SF-UDA setting based on a loss reweighting strategy that provides robustness against the noise that inevitably affects the pseudo-labels. The classification loss is reweighted according to the reliability of the pseudo-labels, which is measured by estimating their uncertainty. Guided by this reweighting strategy, the pseudo-labels are progressively refined by aggregating knowledge from neighbouring samples. Furthermore, a self-supervised contrastive framework is leveraged as a target space regulariser to enhance this knowledge aggregation. A novel negative pairs exclusion strategy is proposed to identify and exclude negative pairs made of samples sharing the same class, even in the presence of noise in the pseudo-labels. Our method outperforms previous methods on three major benchmarks by a large margin. We set a new SF-UDA state of the art on VisDA-C and DomainNet, with a performance gain of +1.8% on both benchmarks, and on PACS, with +12.3% in the single-source setting and +6.6% in multi-target adaptation. Additional analyses demonstrate that the proposed approach is robust to pseudo-label noise, yielding significantly more accurate pseudo-labels than state-of-the-art approaches.
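The reliability-based loss reweighting described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes uncertainty is estimated via the normalised entropy of neighbour-aggregated class distributions, and all function and tensor names are hypothetical.

```python
import torch
import torch.nn.functional as F

def reweighted_pseudo_label_loss(logits: torch.Tensor,
                                 neighbour_probs: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against refined pseudo-labels, reweighted by reliability.

    logits:          model outputs on target samples, shape [batch, num_classes].
    neighbour_probs: averaged predicted distributions of each sample's nearest
                     neighbours, shape [batch, num_classes] (illustrative proxy
                     for the paper's knowledge-aggregation step).
    """
    # Refined pseudo-labels: argmax of the neighbour-aggregated distribution.
    pseudo_labels = neighbour_probs.argmax(dim=1)

    # Reliability = 1 - normalised entropy of the aggregated distribution:
    # confident (low-entropy) pseudo-labels receive weights close to 1,
    # uncertain (high-entropy) ones close to 0.
    num_classes = neighbour_probs.size(1)
    entropy = -(neighbour_probs * neighbour_probs.clamp_min(1e-8).log()).sum(dim=1)
    reliability = 1.0 - entropy / torch.log(torch.tensor(float(num_classes)))

    # Per-sample cross-entropy, scaled by the reliability weight.
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (reliability * ce).mean()
```

In this sketch, a sample whose neighbours agree on a single class contributes its full classification loss, while a sample with a near-uniform aggregated distribution is effectively down-weighted, limiting the influence of noisy pseudo-labels on training.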