Unsupervised domain adaptation (UDA) techniques have recently achieved great success in cross-domain computer vision tasks, enhancing the generalization ability of data-driven deep learning architectures by bridging domain distribution gaps. Most UDA-based cross-domain object detection methods alleviate domain bias by inducing domain-invariant feature generation through an adversarial learning strategy. However, their domain discriminators have limited classification ability due to the unstable adversarial training process. Consequently, the features they induce are not perfectly domain-invariant and still contain domain-private factors, which hinders further reduction of the cross-domain discrepancy. To tackle this issue, we design a Domain Disentanglement Faster-RCNN (DDF) that eliminates source-specific information from the features used for detection task learning. Our DDF method performs feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module, respectively. By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method is demonstrated to be effective and widely applicable.
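To make the triplet-based disentanglement idea concrete, below is a minimal PyTorch sketch of what a global triplet objective could look like: the domain-shared representation of one domain serves as the anchor, the shared representation of the other domain as the positive, and the same image's domain-private representation as the negative. The pooling scheme, margin, and loss weighting are illustrative assumptions, not the paper's exact GTD formulation.

```python
import torch.nn as nn
import torch.nn.functional as F

class GlobalTripletSketch(nn.Module):
    """Hypothetical triplet objective for global feature disentanglement.

    Pulls domain-shared features from the two domains together while
    pushing each one away from its own domain-private feature.
    """

    def __init__(self, margin: float = 1.0):
        super().__init__()
        self.triplet = nn.TripletMarginLoss(margin=margin)

    def forward(self, shared_src, shared_tgt, private_src, private_tgt):
        # Pool spatial feature maps (N, C, H, W) into global vectors (N, C).
        def pool(f):
            return F.adaptive_avg_pool2d(f, 1).flatten(1)

        s_src, s_tgt = pool(shared_src), pool(shared_tgt)
        p_src, p_tgt = pool(private_src), pool(private_tgt)

        # Anchor: shared feature of one domain; positive: shared feature of
        # the other domain (close if truly domain-invariant); negative: the
        # same domain's private feature (pushed beyond the margin).
        loss_src = self.triplet(s_src, s_tgt, p_src)
        loss_tgt = self.triplet(s_tgt, s_src, p_tgt)
        return 0.5 * (loss_src + loss_tgt)
```

In this sketch, the four inputs would come from separate shared and private encoder branches over source and target images; how those branches are built, and how this term is weighted against the detection and adversarial losses, is left to the paper's full method description.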