Recent state-of-the-art source-free domain adaptation (SFDA) methods have focused on learning meaningful cluster structures in the feature space, which has enabled them to adapt knowledge from a source domain to an unlabeled target domain without accessing the private source data. However, existing methods rely on pseudo-labels generated by source models, which can be noisy due to domain shift. In this paper, we study SFDA from the perspective of learning with label noise (LLN). Unlike the label noise in the conventional LLN scenario, we prove that the label noise in SFDA follows a different distribution assumption. We further prove that this difference renders existing LLN methods, which rely on their own distribution assumptions, unable to address the label noise in SFDA. Empirical evidence confirms that applying existing LLN methods to the SFDA problem yields only marginal improvements. On the other hand, despite the fundamental difference between the label noise in the two scenarios, we demonstrate theoretically that the early-time training phenomenon (ETP), previously observed in conventional label-noise settings, can also be observed in the SFDA problem. Extensive experiments demonstrate significant improvements to existing SFDA algorithms by leveraging ETP to address the label noise in SFDA.