Full-stack autonomous driving perception modules usually consist of data-driven models based on multiple sensor modalities. However, these models might be biased towards the sensor setup used for data acquisition. Such a bias can seriously impair the perception models' transferability to new sensor setups, which emerge continuously due to the competitive nature of the market. We envision sensor data abstraction as an interface between sensor data and machine learning applications for highly automated driving (HAD). For this purpose, we review the primary sensor modalities (camera, lidar, and radar) as published in autonomous-driving-related datasets, examine the abstraction of single sensors and of sensor setups, and identify critical paths towards an abstraction of sensor data from multiple perception configurations.