Autonomous driving is a popular research area within the computer vision research community. Since autonomous vehicles are highly safety-critical, ensuring robustness is essential for real-world deployment. While several public multimodal datasets are available, they mainly comprise two sensor modalities (camera and LiDAR), which are not well suited to adverse weather. In addition, they lack far-range annotations, making it difficult to train the neural networks that form the basis of a highway assistant function of an autonomous vehicle. Therefore, we introduce a multimodal dataset for robust autonomous driving with long-range perception. The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view. The collected data was captured on highways and in urban and suburban areas during daytime, night, and rain, and is annotated with 3D bounding boxes with consistent identifiers across frames. Furthermore, we trained unimodal and multimodal baseline models for 3D object detection. Data are available at \url{https://github.com/aimotive/aimotive_dataset}.