In this paper, we present the USTC FLICAR Dataset, which is dedicated to the development of simultaneous localization and mapping (SLAM) and precise 3D reconstruction of the workspace for heavy-duty autonomous aerial work robots. In recent years, numerous public datasets have played significant roles in the advancement of autonomous cars and unmanned aerial vehicles (UAVs). However, these two platforms differ from aerial work robots: UAVs are limited in payload capacity, while cars are restricted to two-dimensional motion. To fill this gap, we create the Giraffe mapping robot, built on a bucket truck and equipped with a variety of well-calibrated and synchronized sensors: four 3D LiDARs, two stereo cameras, two monocular cameras, Inertial Measurement Units (IMUs), and a GNSS/INS system. A laser tracker records millimeter-level ground-truth positions. We also build its ground twin, the Okapi mapping robot, to gather data for comparison. The proposed dataset extends the typical autonomous driving sensing suite to aerial scenes; the name FLICAR reflects this combination of flying and car platforms. We believe the dataset can also represent flying-car scenarios, specifically the takeoff and landing of vertical takeoff and landing (VTOL) flying cars. The dataset is available for download at: https://ustc-flicar.github.io.