Transferring knowledge learned from a labeled source domain to a raw target domain in unsupervised domain adaptation (UDA) is essential for the scalable deployment of autonomous driving systems. State-of-the-art UDA approaches often employ a key concept: they utilize joint supervision signals from both the source domain (with ground truth) and the target domain (with pseudo-labels) for self-training. In this work, we improve upon and extend this concept. We present ConDA, a concatenation-based domain adaptation framework for LiDAR semantic segmentation that (1) constructs an intermediate domain consisting of fine-grained interchange signals from both the source and target domains without destabilizing the semantic coherency of objects and background around the ego-vehicle, and (2) utilizes this intermediate domain for self-training. Additionally, to improve both network training on the source domain and self-training on the intermediate domain, we propose an anti-aliasing regularizer and an entropy aggregator that reduce the detrimental effects of aliasing artifacts and noisy target predictions. Through extensive experiments, we demonstrate that ConDA mitigates the domain gap significantly more effectively than prior art.
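The core concatenation idea can be illustrated with a minimal sketch. Assuming LiDAR scans are represented as 2D range images (one column per azimuth angle), an intermediate-domain sample can be formed by interleaving azimuth sectors from a source scan (with ground-truth labels) and a target scan (with pseudo-labels). The function name, sector partitioning, and array shapes below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def build_intermediate_sample(src_scan, src_labels, tgt_scan, tgt_pseudo,
                              num_sectors=8):
    """Sketch of concatenation-based mixing: alternate azimuth sectors of
    two range images (shape [H, W]) to form one intermediate-domain sample.

    Even sectors keep source data (ground-truth supervision); odd sectors
    take target data (pseudo-label supervision). The sector count and the
    even/odd assignment are hypothetical choices for illustration.
    """
    H, W = src_scan.shape[:2]
    mixed_scan = src_scan.copy()
    mixed_labels = src_labels.copy()
    sector_w = W // num_sectors
    for s in range(num_sectors):
        if s % 2 == 1:  # odd sectors come from the target domain
            lo, hi = s * sector_w, (s + 1) * sector_w
            mixed_scan[:, lo:hi] = tgt_scan[:, lo:hi]
            mixed_labels[:, lo:hi] = tgt_pseudo[:, lo:hi]
    return mixed_scan, mixed_labels
```

Because whole angular sectors are swapped rather than individual points, nearby objects and background within each sector remain spatially coherent, which is the property the abstract highlights.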