Deep neural networks have been widely studied in autonomous driving applications such as semantic segmentation and depth estimation. However, training a neural network in a supervised manner requires a large amount of annotated labels, which are expensive and time-consuming to collect. Recent studies leverage synthetic data collected from virtual environments, which are much easier to acquire and more accurate than real-world data, but models trained on them usually suffer from poor generalization due to the inherent domain shift problem. In this paper, we propose Domain-Agnostic Contrastive Learning (DACL), a two-stage unsupervised domain adaptation framework with cyclic adversarial training and a contrastive loss. DACL leads the neural network to learn domain-agnostic representations, overcoming the performance degradation that arises when the training and test data distributions differ. Our proposed approach outperforms previous state-of-the-art methods on the monocular depth estimation task and also shows effectiveness on the semantic segmentation task.
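The abstract names a contrastive loss as one of DACL's two ingredients. As a point of reference only, the sketch below shows a generic InfoNCE-style contrastive loss in NumPy; this is a common formulation of contrastive learning, not necessarily the exact loss used in DACL, and the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (illustrative sketch,
    not DACL's exact formulation).

    anchors, positives: (N, D) embedding arrays. Row i of `positives`
    is the positive pair for row i of `anchors`; all other rows in the
    batch serve as negatives.
    """
    # L2-normalize embeddings so dot products are cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the diagonal entries as the correct (positive) class.
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls each anchor toward its positive pair and pushes it away from the other samples in the batch, which is the mechanism a domain-agnostic representation objective would build on.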