Virtual testing is crucial for ensuring safety in autonomous driving, and sensor simulation is an important task in this domain. Most current LiDAR simulations are very simplistic and are mainly used for initial tests, while the majority of insights are still gathered on the road. In this paper, we propose a lightweight approach to more realistic LiDAR simulation that learns a real sensor's behavior from test drive data and transfers it to the virtual domain. The central idea is to cast the simulation as an image-to-image translation problem. We train our pix2pix-based architecture on two real-world data sets, the popular KITTI data set and the Audi Autonomous Driving Dataset, both of which provide RGB and LiDAR images. We apply the trained network to synthetic renderings and show that it generalizes sufficiently from real to simulated images. This strategy allows us to skip the sensor-specific, expensive, and complex LiDAR physics simulation in our synthetic world, and it avoids the oversimplification and the large domain gap that the clean synthetic environment would otherwise introduce.
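To illustrate the translation idea, the following is a minimal, hypothetical PyTorch sketch of a pix2pix-style setup: a small U-Net-like generator maps an RGB camera image to a projected LiDAR range image, and a PatchGAN-like discriminator judges paired inputs. The layer sizes, loss weights, and module names are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal pix2pix-style sketch (assumed layer sizes / loss weights, not the paper's exact setup).
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Toy encoder-decoder with one skip connection: RGB (3 ch) -> LiDAR range image (1 ch)."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(128, 1, 4, 2, 1), nn.Tanh())  # 128 = 64 decoded + 64 skip

    def forward(self, rgb):
        e1 = self.enc1(rgb)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        return self.dec1(torch.cat([d2, e1], dim=1))  # skip connection from encoder to decoder

class PatchDiscriminator(nn.Module):
    """Toy PatchGAN: classifies (RGB, LiDAR image) pairs patch-wise as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, 1, 1))  # one logit per patch

    def forward(self, rgb, lidar):
        return self.net(torch.cat([rgb, lidar], dim=1))

# One (hypothetical) training step on a single paired sample.
G, D = UNetGenerator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
gan_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

rgb = torch.randn(1, 3, 64, 64)       # camera image (stand-in for a KITTI / A2D2 frame)
lidar_gt = torch.randn(1, 1, 64, 64)  # projected LiDAR range image (ground truth)

# Discriminator update: real pairs -> 1, generated pairs -> 0.
fake = G(rgb).detach()
pred_real, pred_fake = D(rgb, lidar_gt), D(rgb, fake)
d_loss = gan_loss(pred_real, torch.ones_like(pred_real)) + gan_loss(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: adversarial term plus L1 reconstruction term (weight 100 as in the original pix2pix).
fake = G(rgb)
pred = D(rgb, fake)
g_loss = gan_loss(pred, torch.ones_like(pred)) + 100.0 * l1_loss(fake, lidar_gt)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At inference time, the same generator would be applied to a rendering of the synthetic scene instead of a real camera frame, producing a LiDAR-like image without an explicit physics simulation of the sensor.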