Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift. Specifically, UDA methods align the source and target representations to improve generalization on the target domain. Further, UDA methods work under the assumption that the source data is accessible during the adaptation process. However, in real-world scenarios, access to labelled source data is often restricted due to privacy regulations, data transmission constraints, or proprietary data concerns. The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data. In this paper, we explore the SFDA setting for the task of adaptive object detection. To this end, we propose a novel training strategy for adapting a source-trained object detector to the target domain without source data. More precisely, we design a novel contrastive loss to enhance the target representations by exploiting the object relations for a given target-domain input. These object instance relations are modelled using an Instance Relation Graph (IRG) network, which is then used to guide the contrastive representation learning. In addition, we utilize a student-teacher knowledge distillation strategy to avoid overfitting to the noisy pseudo-labels generated by the source-trained model. Extensive experiments on multiple object detection benchmark datasets show that the proposed approach is able to efficiently adapt source-trained object detectors to the target domain, outperforming previous state-of-the-art domain adaptive detection methods. Code and models are provided in \href{https://viudomain.github.io/irg-sfda-web/}{https://viudomain.github.io/irg-sfda-web/}.