Source-free object detection (SFOD) aims to transfer a detector pre-trained on a label-rich source domain to an unlabeled target domain without access to the source data. Most existing SFOD methods generate pseudo labels with the source-pretrained model to guide training, but these pseudo labels are usually highly noisy due to the large domain discrepancy. To obtain better pseudo supervision, we divide the target domain into source-similar and source-dissimilar parts and align them in the feature space via adversarial learning. Specifically, we design a detection-variance-based criterion to divide the target domain, motivated by the finding that larger detection variance indicates higher recall and greater similarity to the source domain. We then incorporate an adversarial module into a mean teacher framework to make the feature distributions of the two subsets indistinguishable. Extensive experiments on multiple cross-domain object detection datasets demonstrate that our method consistently outperforms existing SFOD methods.
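The detection-variance-based division of the target domain could be sketched as follows. This is a minimal illustration under our own assumptions: we take per-image detection confidences from several stochastic forward passes, use their variance as the criterion, and split by a simple ratio; the paper's exact scoring and thresholding details are not specified here.

```python
import numpy as np

def detection_variance(per_pass_scores):
    """Mean variance of detection confidences across stochastic forward passes.

    per_pass_scores: list of 1-D arrays, one per forward pass, each holding
    the confidence scores of the detections in one image (assumed aligned).
    """
    return float(np.var(np.stack(per_pass_scores), axis=0).mean())

def split_target(images_scores, ratio=0.5):
    """Split target images into source-similar and source-dissimilar subsets.

    images_scores: dict mapping image id -> list of per-pass score arrays.
    Images with larger detection variance are treated as source-similar,
    following the finding that larger variance correlates with higher recall
    and greater similarity to the source domain.
    """
    variances = {k: detection_variance(v) for k, v in images_scores.items()}
    order = sorted(variances, key=variances.get, reverse=True)
    cut = int(len(order) * ratio)
    return order[:cut], order[cut:]  # (source-similar, source-dissimilar)
```

The two returned subsets would then feed the adversarial module inside the mean teacher framework, with a domain discriminator trained to tell them apart and the feature extractor trained to fool it.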