Image synthesis driven by computer graphics has recently achieved remarkable realism, yet synthetic image data generated this way exhibits a significant domain gap with respect to real-world data. This is especially true in autonomous driving scenarios, where the gap is a critical obstacle to utilizing synthetic data for training neural networks. We propose a method based on a domain-invariant scene representation to directly synthesize traffic scene imagery without rendering. Specifically, we rely on synthetic scene graphs as our internal representation and introduce an unsupervised neural network architecture for realistic traffic scene synthesis. We enhance synthetic scene graphs with spatial information about the scene and demonstrate the effectiveness of our approach through scene manipulation.
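To make the representation concrete, the following is a minimal sketch of what a traffic scene graph enriched with spatial information might look like. All names and attributes here are hypothetical illustrations, not the paper's actual data format: nodes carry an object class plus spatial attributes (normalized position and size), and edges encode pairwise relations between objects.

```python
# Illustrative sketch only -- class names, attributes, and relation
# labels are hypothetical, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    obj_class: str                 # e.g. "car", "pedestrian", "road"
    position: tuple[float, float]  # normalized (x, y) scene coordinates
    size: tuple[float, float]      # normalized (width, height)

@dataclass
class SceneGraph:
    nodes: list[SceneNode] = field(default_factory=list)
    # edges as (subject index, relation label, object index) triples
    edges: list[tuple[int, str, int]] = field(default_factory=list)

    def add_node(self, node: SceneNode) -> int:
        self.nodes.append(node)
        return len(self.nodes) - 1

# Build a tiny scene: a car on a road, a pedestrian left of the car.
g = SceneGraph()
road = g.add_node(SceneNode("road", (0.5, 0.8), (1.0, 0.4)))
car = g.add_node(SceneNode("car", (0.6, 0.7), (0.2, 0.1)))
ped = g.add_node(SceneNode("pedestrian", (0.3, 0.65), (0.05, 0.15)))
g.edges.append((car, "on", road))
g.edges.append((ped, "left_of", car))
```

Manipulating such a graph (moving a node's position, adding or removing objects) would then correspond to the scene-manipulation experiments the abstract mentions.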