Most object detection methods for autonomous driving assume a consistent feature distribution between training and testing data, an assumption that does not hold when weather conditions differ significantly. An object detection model trained in clear weather may not be effective in foggy weather because of the domain gap. This paper proposes a novel domain adaptive object detection framework for autonomous driving under foggy weather. Our method leverages both image-level and object-level adaptation to diminish the domain discrepancy in image style and object appearance. To further improve the model on challenging samples, we also introduce a new adversarial gradient reversal layer that performs adversarial mining of hard examples jointly with domain adaptation. Moreover, we propose generating an auxiliary domain via data augmentation to enforce a new domain-level metric regularization. Experimental results on public benchmarks show the effectiveness and accuracy of the proposed method. The code is available at https://github.com/jinlong17/DA-Detect.
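The core building block behind adversarial domain adaptation is the gradient reversal layer: it acts as the identity in the forward pass, but negates (and scales) gradients in the backward pass so that the feature extractor is trained to confuse a domain classifier. The sketch below is a minimal, framework-free illustration of that behavior, not the authors' implementation; the class name, the `lam` scaling factor, and the manual `forward`/`backward` interface are all hypothetical simplifications.

```python
class GradientReversal:
    """Minimal sketch of a gradient reversal layer (GRL).

    Forward pass: identity on the input.
    Backward pass: gradients are multiplied by -lam, so optimizing the
    domain classifier's loss pushes the upstream feature extractor in the
    opposite direction, encouraging domain-invariant features.

    Hypothetical illustration; real frameworks implement this as a custom
    autograd function (e.g. a torch.autograd.Function subclass).
    """

    def __init__(self, lam=1.0):
        # lam controls the strength of the reversed gradient signal.
        self.lam = lam

    def forward(self, x):
        # Identity: features pass through unchanged to the domain classifier.
        return x

    def backward(self, grad_output):
        # Reverse and scale the gradient flowing back to the feature extractor.
        return -self.lam * grad_output
```

In a full pipeline the GRL sits between the detector's feature extractor and a domain classifier; the paper's variant additionally uses this adversarial signal to mine hard examples during adaptation.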