Although unsupervised domain adaptation methods have achieved remarkable performance in semantic scene segmentation for the visual perception of self-driving cars, these approaches remain impractical in real-world use cases. In practice, segmentation models may encounter new data that have never been seen before. Moreover, the previous training data of the segmentation models may be inaccessible due to privacy concerns. To address these problems, we propose a Continual Unsupervised Domain Adaptation (CONDA) approach that allows the model to continuously learn and adapt in the presence of new data. Our proposed approach is designed without requiring access to previous training data. To avoid the catastrophic forgetting problem and maintain the performance of the segmentation models, we present a novel Bijective Maximum Likelihood loss that constrains shifts in the predicted segmentation distribution. Experimental results on the continual unsupervised domain adaptation benchmark demonstrate the advanced performance of the proposed CONDA method.
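To make the idea of a bijective maximum-likelihood loss concrete, the following is a minimal sketch, not the paper's actual implementation: an invertible affine map pushes predicted segmentation scores toward a standard Gaussian base distribution, and the loss is the negative log-likelihood obtained via the change-of-variables formula. The affine bijection and the parameter names `a` and `b` are illustrative assumptions; the paper's flow architecture is not specified in this abstract.

```python
import numpy as np

def bijective_ml_loss(x, a=1.0, b=0.0):
    """Negative log-likelihood of x under an affine bijection z = a*x + b
    mapped to a standard Gaussian base distribution N(0, 1).

    Change of variables: log p(x) = log N(z; 0, 1) + log |dz/dx|.
    (Illustrative sketch; a real flow would use a learned, multi-layer bijection.)
    """
    z = a * x + b                                    # forward pass of the bijection
    log_det = np.log(np.abs(a))                      # log |det Jacobian| per element
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))   # standard Gaussian log-density
    return -np.mean(log_base + log_det)              # average negative log-likelihood

# Example: scores for four pixels from a hypothetical segmentation head.
scores = np.array([0.2, -0.5, 1.3, 0.0])
loss = bijective_ml_loss(scores)
```

Minimizing this quantity with respect to the bijection's parameters keeps the predicted distribution close to the fixed base distribution, which is the mechanism the abstract invokes for constraining distribution shift across continual adaptation steps.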