Paris-CARLA-3D is a dataset of several dense colored point clouds of outdoor environments built with a mobile LiDAR and camera system. The data consist of two sets: synthetic data produced with the open-source CARLA simulator (700 million points) and real data acquired in the city of Paris (60 million points), hence the name Paris-CARLA-3D. One advantage of this dataset is that the same LiDAR and camera platform used to produce the real data was simulated in the open-source CARLA simulator. In addition, the real data were manually annotated with the semantic tags of CARLA, allowing methods for transfer from synthetic to real data to be tested. The objective is to provide a challenging dataset for evaluating and improving methods on difficult vision tasks for the 3D mapping of outdoor environments: semantic segmentation, instance segmentation, and scene completion. For each task, we describe the evaluation protocol as well as the experiments carried out to establish a baseline.
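As a point of reference for the semantic segmentation protocol mentioned above, per-class intersection-over-union (IoU) and its class-wise mean (mIoU) are the customary metrics for this task on point clouds. The sketch below is a minimal illustration of how such scores can be computed from predicted and ground-truth point labels, assuming the evaluation follows this common practice; the function name, `num_classes` value, and label arrays are illustrative and not taken from the dataset's official tooling.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class IoU between predicted and ground-truth point labels (1D integer arrays)."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        # Classes absent from both prediction and ground truth are ignored (NaN)
        ious.append(inter / union if union > 0 else np.nan)
    return np.array(ious)

# Illustrative labels only, not dataset values
pred = np.array([0, 1, 1, 2, 2, 2])
gt   = np.array([0, 1, 2, 2, 2, 1])
ious = per_class_iou(pred, gt, num_classes=3)
miou = np.nanmean(ious)  # mean over classes that actually occur
print(ious, miou)
```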