Vehicle-to-Everything (V2X) networks have enabled collaborative perception in autonomous driving, a promising solution to the fundamental defects of stand-alone intelligence, including blind zones and limited long-range perception. However, the lack of datasets has severely hindered the development of collaborative perception algorithms. In this work, we release DOLPHINS: Dataset for cOllaborative Perception enabled Harmonious and INterconnected Self-driving, a new simulated, large-scale, multi-scenario, multi-view, multi-modality autonomous driving dataset, which provides a ground-breaking benchmark platform for interconnected autonomous driving. DOLPHINS outperforms current datasets in six dimensions: temporally-aligned images and point clouds from both vehicles and Road Side Units (RSUs) enable both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) collaborative perception; 6 typical scenarios with dynamic weather conditions make it the most diverse interconnected autonomous driving dataset; meticulously selected viewpoints provide full coverage of the key areas and every object; 42376 frames and 292549 objects, together with the corresponding 3D annotations, geo-positions, and calibrations, compose the largest dataset for collaborative perception; Full-HD images and 64-line LiDARs constitute high-resolution data with sufficient detail; and well-organized APIs and open-source code ensure the extensibility of DOLPHINS. We also construct a benchmark of 2D detection, 3D detection, and multi-view collaborative perception tasks on DOLPHINS. The experimental results show that a raw-level fusion scheme through V2X communication can improve precision and reduce the need for expensive LiDAR equipment on vehicles when RSUs exist, which may accelerate the adoption of interconnected self-driving vehicles. DOLPHINS is now available at https://dolphins-dataset.net/.