The performance of nighttime semantic segmentation is restricted by poor illumination and the lack of pixel-wise annotations, which severely limits its application in autonomous driving. Existing works, e.g., those using twilight as an intermediate target domain to adapt from daytime to nighttime, may fail to cope with the inherent differences between datasets caused by camera equipment and urban style. Faced with these two types of domain shift, i.e., illumination and the inherent difference between datasets, we propose a novel domain adaptation framework via cross-domain correlation distillation, called CCDistill. The invariance of the illumination or inherent difference between two images is fully exploited to compensate for the lack of labels for nighttime images. Specifically, we extract the content and style knowledge contained in features and calculate the degree of inherent or illumination difference between two images. Domain adaptation is then achieved by enforcing the invariance of the same kind of difference. Extensive experiments on Dark Zurich and ACDC demonstrate that CCDistill achieves state-of-the-art performance for nighttime semantic segmentation. Notably, our method is a one-stage domain adaptation network, which avoids increasing the inference time. Our implementation is available at https://github.com/ghuan99/CCDistill.
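To make the core idea concrete, the following is a minimal sketch (not the paper's exact formulation) of a cross-domain correlation distillation loss: style knowledge is summarized as a Gram (channel-correlation) matrix per feature map, the "difference" between two images is the gap between their Gram matrices, and the loss enforces that the same kind of difference (e.g., day-vs-night illumination shift) is invariant across two feature pairs. All function names and the normalization scheme here are illustrative assumptions.

```python
import numpy as np

def gram(feat):
    """Style summary: channel correlation (Gram) matrix of a C x N feature map."""
    c, n = feat.shape
    return feat @ feat.T / n  # C x C

def correlation_distill_loss(f_src, f_tgt, g_src, g_tgt, eps=1e-8):
    """Encourage two feature pairs to exhibit the same kind of domain difference.

    (f_src, f_tgt) and (g_src, g_tgt) are two pairs of C x N feature maps
    separated by the same type of shift (e.g., illumination). The loss
    penalizes disagreement between their normalized Gram-matrix differences.
    """
    d1 = gram(f_src) - gram(f_tgt)  # difference induced by the shift, pair 1
    d2 = gram(g_src) - gram(g_tgt)  # difference induced by the shift, pair 2
    d1 = d1 / (np.linalg.norm(d1) + eps)  # scale-invariant comparison
    d2 = d2 / (np.linalg.norm(d2) + eps)
    return float(np.sum((d1 - d2) ** 2))
```

When both pairs undergo an identical shift the normalized differences coincide and the loss is zero; in training, this term would be added to the usual segmentation loss so the unlabeled nighttime branch is supervised through the labeled daytime branch.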