Learning a model on one labeled dataset that generalizes well to another domain is difficult, as several shifts may occur between the data domains. This is notably the case for lidar data, where models can exhibit large performance discrepancies due, for instance, to different lidar patterns or changes in acquisition conditions. This paper addresses the corresponding Unsupervised Domain Adaptation (UDA) task for semantic segmentation. To mitigate this problem, we introduce an unsupervised auxiliary task: learning an implicit representation of the underlying surface simultaneously on source and target data. As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data. This novel strategy differs from classical minimization of statistical divergences and from lidar-specific, state-of-the-art domain adaptation techniques. Our experiments demonstrate that our method outperforms the current state of the art in both synthetic-to-real and real-to-real scenarios.
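To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of the joint training described above: a shared backbone, a segmentation head supervised on labeled source data, and an unsupervised implicit-surface (occupancy) head trained on both domains. All module names, the `surface_targets` construction, and the `lambda_aux` weight are illustrative assumptions, not the authors' actual architecture or losses.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Shared encoder producing per-point latent features for both domains."""
    def __init__(self, in_dim=3, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))

    def forward(self, points):           # points: (B, N, 3)
        return self.net(points)          # latent features: (B, N, feat_dim)

class SegHead(nn.Module):
    """Semantic segmentation head, supervised on the labeled source domain only."""
    def __init__(self, feat_dim=64, num_classes=20):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):
        return self.fc(feats)            # per-point class logits: (B, N, num_classes)

class SurfaceHead(nn.Module):
    """Auxiliary implicit-surface head: predicts an occupancy value at query
    points from the shared latent features; needs no semantic labels."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(feat_dim + 3, 128), nn.ReLU(),
                                nn.Linear(128, 1))

    def forward(self, feats, queries):   # queries: (B, N, 3)
        return self.fc(torch.cat([feats, queries], dim=-1)).squeeze(-1)

def surface_targets(points):
    """Placeholder for self-supervised occupancy targets derived from the raw
    scan (e.g. free space along lidar rays); dummy values for illustration only."""
    queries = points + 0.05 * torch.randn_like(points)
    occupancy = torch.rand(points.shape[:2])
    return queries, occupancy

# Dummy batches standing in for a labeled source scan and an unlabeled target scan.
B, N, num_classes = 2, 1024, 20
src_pts, src_labels = torch.randn(B, N, 3), torch.randint(0, num_classes, (B, N))
tgt_pts = torch.randn(B, N, 3)

backbone, seg_head, surf_head = Backbone(), SegHead(), SurfaceHead()
params = (list(backbone.parameters()) + list(seg_head.parameters())
          + list(surf_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
lambda_aux = 1.0                         # weight of the auxiliary loss (assumed)

# One joint training step.
src_feats, tgt_feats = backbone(src_pts), backbone(tgt_pts)

# Supervised segmentation loss, available only on the source domain.
loss_seg = ce(seg_head(src_feats).permute(0, 2, 1), src_labels)

# Unsupervised surface-reconstruction loss on BOTH domains: since the same
# latent features must explain the geometry of source and target scans alike,
# the encoder is pushed to absorb sensor- and domain-specific discrepancies.
q_src, occ_src = surface_targets(src_pts)
q_tgt, occ_tgt = surface_targets(tgt_pts)
loss_aux = (bce(surf_head(src_feats, q_src), occ_src)
            + bce(surf_head(tgt_feats, q_tgt), occ_tgt))

loss = loss_seg + lambda_aux * loss_aux
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the adaptation signal comes solely from sharing the backbone across the supervised and unsupervised losses; the real method's surface supervision and network design may differ.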