Transferring knowledge learned from a labeled source domain to a raw target domain for unsupervised domain adaptation (UDA) is essential to the scalable deployment of autonomous driving systems. State-of-the-art methods in UDA often employ a key idea: utilizing joint supervision signals from both the source and target domains for self-training. In this work, we improve and extend this aspect. We present ConDA, a concatenation-based domain adaptation framework for LiDAR segmentation that: 1) constructs an intermediate domain consisting of fine-grained interchange signals from both the source and target domains, without destabilizing the semantic coherency of objects and background around the ego-vehicle; and 2) utilizes the intermediate domain for self-training. To improve both network training on the source domain and self-training on the intermediate domain, we propose an anti-aliasing regularizer and an entropy aggregator to reduce the negative effects caused by aliasing artifacts and noisy pseudo-labels. Through extensive experiments, we demonstrate that ConDA significantly outperforms prior methods in mitigating the domain gap.
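To make the intermediate-domain construction concrete, below is a minimal, hypothetical sketch of one plausible realization: interleaving whole azimuth sectors of a source scan (with ground-truth labels) and a target scan (with pseudo-labels) around the ego-vehicle. The function name `build_intermediate_sample`, the parameter `num_sectors`, the even/odd sector-alternation scheme, and the N×4 point-cloud layout are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def build_intermediate_sample(source_points, source_labels,
                              target_points, target_pseudo_labels,
                              num_sectors=8):
    """Hypothetical sketch: concatenate alternating azimuth sectors of a
    source scan (ground-truth labels) and a target scan (pseudo-labels)
    into one intermediate-domain sample. Points are assumed to be N x 4
    arrays (x, y, z, intensity); labels are length-N arrays."""
    def sector_ids(points):
        # Azimuth angle of each point around the ego-vehicle, binned
        # into `num_sectors` equal angular sectors.
        azimuth = np.arctan2(points[:, 1], points[:, 0])  # in [-pi, pi]
        bins = (azimuth + np.pi) / (2 * np.pi) * num_sectors
        return bins.astype(int) % num_sectors

    src_sec = sector_ids(source_points)
    tgt_sec = sector_ids(target_points)

    mixed_points, mixed_labels = [], []
    for s in range(num_sectors):
        if s % 2 == 0:  # even sectors taken from the source scan
            mask = src_sec == s
            mixed_points.append(source_points[mask])
            mixed_labels.append(source_labels[mask])
        else:           # odd sectors taken from the target scan
            mask = tgt_sec == s
            mixed_points.append(target_points[mask])
            mixed_labels.append(target_pseudo_labels[mask])
    return np.concatenate(mixed_points), np.concatenate(mixed_labels)
```

The design intuition, under these assumptions, is that exchanging whole sectors rather than individual points preserves the local structure of objects and background near the ego-vehicle, which matches the abstract's requirement that the fine-grained interchange not destabilize semantic coherency.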