Autonomous driving is a popular research area within the computer vision community. Since autonomous vehicles are highly safety-critical, ensuring robustness is essential for real-world deployment. While several public multimodal datasets are available, they mainly comprise two sensor modalities (camera and LiDAR), which are not well suited for adverse weather. In addition, they lack far-range annotations, making it harder to train the neural networks that form the basis of a highway assistant function in an autonomous vehicle. Therefore, we introduce a multimodal dataset for robust autonomous driving with long-range perception. The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view. The collected data was captured in highway, urban, and suburban areas during daytime, night, and rain, and is annotated with 3D bounding boxes with consistent identifiers across frames. Furthermore, we trained unimodal and multimodal baseline models for 3D object detection. Data are available at \url{https://github.com/aimotive/aimotive_dataset}.