This paper presents a Simple and effective unsupervised adaptation method for Robust Object Detection (SimROD). To overcome the challenging issues of domain shift and pseudo-label noise, our method integrates a novel domain-centric augmentation method, a gradual self-labeling adaptation procedure, and a teacher-guided fine-tuning mechanism. Using our method, target domain samples can be leveraged to adapt object detection models without changing the model architecture or generating synthetic data. On both image-corruption and higher-level cross-domain adaptation benchmarks, our method outperforms prior baselines. SimROD achieves a new state-of-the-art on standard real-to-synthetic and cross-camera setup benchmarks. On the image corruption benchmark, models adapted with our method achieve a relative robustness improvement of 15-25% AP50 on Pascal-C and 5-6% AP on COCO-C and Cityscapes-C. On the cross-domain benchmark, our method outperforms the best baseline by up to 8% AP50 on the Comic dataset and up to 4% on the Watercolor dataset.
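The abstract mentions a gradual self-labeling procedure in which a teacher model pseudo-labels unlabeled target-domain images to guide student fine-tuning. The following is a minimal, illustrative sketch of that general self-training pattern only; all names (`generate_pseudo_labels`, `adapt`, `CONF_THRESH`, the prediction/update callables, and the confidence-filtering rule) are hypothetical placeholders and not the authors' implementation.

```python
# Hypothetical sketch of a teacher-guided self-labeling loop; the threshold,
# rounds, and callable interfaces are illustrative assumptions, not SimROD's code.

CONF_THRESH = 0.5  # assumed cutoff: keep only confident teacher detections


def generate_pseudo_labels(teacher_predict, unlabeled_images):
    """Run the teacher on unlabeled target-domain images and keep
    high-confidence detections as pseudo-labels."""
    pseudo = []
    for img in unlabeled_images:
        boxes = [b for b in teacher_predict(img) if b["score"] >= CONF_THRESH]
        if boxes:
            pseudo.append((img, boxes))
    return pseudo


def adapt(teacher_predict, student_update, unlabeled_images, rounds=2):
    """Gradual adaptation: alternate pseudo-labeling with student fine-tuning.
    `student_update` stands in for one fine-tuning pass over (image, boxes)
    pairs; a real pipeline would also apply domain-centric augmentation."""
    pseudo = []
    for _ in range(rounds):
        pseudo = generate_pseudo_labels(teacher_predict, unlabeled_images)
        student_update(pseudo)
    return pseudo
```

Confidence filtering is the usual way self-training mitigates pseudo-label noise: low-score teacher detections are dropped rather than propagated to the student.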