Synthetic data generation is an appealing approach to creating novel traffic scenarios for autonomous driving. However, deep learning perception algorithms trained solely on synthetic data suffer severe performance drops when tested on real data. These drops are commonly attributed to the domain gap between real and synthetic data. Domain adaptation methods applied to mitigate this gap achieve visually appealing results, but usually introduce semantic inconsistencies into the translated samples. In this work, we propose a novel, unsupervised, end-to-end domain adaptation network architecture that enables semantically consistent \textit{sim2real} image transfer. Our method performs content disentanglement by employing a shared content encoder and a fixed style code.
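The disentanglement idea in the last sentence can be illustrated with a toy sketch: one content encoder is shared across the synthetic and real domains, and the style input to the decoder is a fixed code rather than one predicted per image. All names, dimensions, and the linear encoder/decoder below are hypothetical placeholders for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
CONTENT_DIM, STYLE_DIM, IMG_DIM = 8, 4, 16

# A single content encoder shared by both domains, and a decoder for the
# target (real) domain; both are random toy linear maps here.
W_enc = rng.standard_normal((CONTENT_DIM, IMG_DIM)) * 0.1
W_dec = rng.standard_normal((IMG_DIM, CONTENT_DIM + STYLE_DIM)) * 0.1

# The style code is held fixed instead of being inferred per sample,
# so every translation is conditioned on the same style.
FIXED_STYLE = np.ones(STYLE_DIM)

def encode_content(x):
    """Shared content encoder: same weights for synthetic and real inputs."""
    return np.tanh(W_enc @ x)

def decode_to_real(content):
    """Decoder conditioned on the fixed style code."""
    return W_dec @ np.concatenate([content, FIXED_STYLE])

# A sim2real translation pass on a toy synthetic sample.
x_sim = rng.standard_normal(IMG_DIM)
x_translated = decode_to_real(encode_content(x_sim))
assert x_translated.shape == (IMG_DIM,)
```

Because the content code is the only per-sample input to the decoder, the semantic layout of the scene is carried through the translation unchanged, which is the intuition behind the semantic consistency claim.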