Vehicle-to-everything (V2X), which denotes the collaboration between a vehicle and any entity in its surroundings, can fundamentally improve perception in self-driving systems. While individual perception has advanced rapidly, collaborative perception has made little progress due to the shortage of public V2X datasets. In this work, we present the V2X-Sim dataset, the first public large-scale collaborative perception dataset in autonomous driving. V2X-Sim provides: 1) well-synchronized recordings from roadside infrastructure and multiple vehicles at an intersection to enable collaborative perception, 2) multi-modality sensor streams to facilitate multi-modality perception, and 3) diverse, well-annotated ground truth to support various downstream tasks including detection, tracking, and segmentation. We seek to inspire research on multi-agent, multi-modality, multi-task perception, and our virtual dataset promises to promote the development of collaborative perception before realistic datasets become widely available.