The emergence of data-driven approaches to control and planning in robotics has highlighted the need for experimental robotic platforms dedicated to data collection. However, such platforms are often complex and expensive to implement, in particular for flying and terrestrial robots, where precise position estimation requires motion capture (MoCap) devices or Lidar. To simplify the use of a research robotic platform across a wide range of indoor and outdoor environments, we present a data validation tool for ego-pose estimation that requires no equipment other than the on-board camera. The method and tool allow a rapid, visual, and quantitative evaluation of the quality of ego-pose sensors, and are sensitive to different sources of flaws in the acquisition chain, ranging from desynchronization of the sensor streams to misestimation of the geometric parameters of the robotic platform. Using computer vision, the sensor information is used to compute the motion of a semantic scene point through its projection into the 2D image space of the on-board camera. The deviations of these keypoints from references created with a semi-automatic tool allow rapid and simple quality assessment of the data collected on the platform. To demonstrate the performance of our method, we evaluate it on two challenging standard UAV datasets as well as one dataset taken with a terrestrial robot.
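To make the projection-based check concrete, the following is a minimal sketch (not the authors' implementation) of how an ego-pose estimate can be validated in pixel space: a known 3D scene point is reprojected through the estimated camera pose, and its deviation from a reference keypoint is measured. All names, the pinhole intrinsics K, and the example values are illustrative assumptions.

```python
# Minimal sketch of the reprojection-based validation idea.
# Assumptions (hypothetical, for illustration only): a pinhole camera
# with intrinsics K, and an ego-pose estimate giving the world-to-camera
# rotation R and translation t.
import numpy as np

def project_point(K, R, t, X_world):
    """Project a 3D scene point (world frame) into 2D pixel coordinates."""
    X_cam = R @ X_world + t      # world frame -> camera frame
    uvw = K @ X_cam              # camera frame -> homogeneous pixel coords
    return uvw[:2] / uvw[2]      # perspective division

def reprojection_error(K, R, t, X_world, keypoint_ref):
    """Pixel deviation between the reprojected point and a reference keypoint."""
    return np.linalg.norm(project_point(K, R, t, X_world) - keypoint_ref)

# Example: a desynchronized or miscalibrated sensor pose shows up
# directly as a pixel-space deviation from the annotated reference.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
X = np.array([0.5, -0.2, 4.0])    # semantic scene point, world frame
ref = np.array([415.0, 205.0])    # reference keypoint from annotation
print(f"deviation: {reprojection_error(K, R, t, X, ref):.2f} px")
```

Under this sketch, any error in the acquisition chain (timestamp offsets, wrong extrinsics, wrong platform geometry) perturbs R and t and therefore grows the reported pixel deviation, which is what makes a simple threshold on this quantity usable as a data-quality signal.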