In dynamic and cramped industrial environments, achieving reliable Visual Teach and Repeat (VT&R) with a single camera is challenging. In this work, we develop a robust method for non-synchronized multi-camera VT&R. Our contribution is expected Camera Performance Models (CPMs), which evaluate the camera streams from the teach step to determine the most informative one for localization during the repeat step. By actively selecting the most suitable camera for localization, we are able to successfully complete missions when one of the cameras is occluded, faces feature-poor locations, or when the environment has changed. Furthermore, we explore the specific challenges of achieving VT&R on a dynamic quadruped robot, ANYmal. The camera does not follow a linear path (due to the walking gait and holonomicity), so precise path following cannot be achieved. Our experiments feature forward- and backward-facing stereo cameras and show VT&R performance in cluttered indoor and outdoor scenarios. We compared the trajectories the robot executed during the repeat steps, demonstrating typical tracking precision of less than 10 cm on average. With a view towards omnidirectional localization, we show how the approach generalizes to four cameras in simulation. Video: https://youtu.be/iAY0lyjAnqY
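The active camera selection described above can be illustrated with a minimal sketch. The scoring function below is a hypothetical stand-in for the paper's learned Camera Performance Models: it simply ranks cameras by a predicted performance score (e.g. an expected number of inlier feature matches) and picks the best one, which is the selection step the abstract describes, not the models themselves.

```python
# Minimal sketch of active camera selection (assumed interface, not the
# paper's implementation). Each camera stream has a CPM-style score
# predicting how informative it will be for localization at the current
# point along the taught path.

def select_camera(cpm_scores):
    """Return the index of the camera with the highest predicted score.

    cpm_scores: list of floats, one predicted performance score per camera.
    """
    if not cpm_scores:
        raise ValueError("no camera scores provided")
    return max(range(len(cpm_scores)), key=lambda i: cpm_scores[i])

# Example: forward- and backward-facing cameras. If the forward camera is
# occluded or faces a feature-poor area, its predicted score drops and the
# backward camera (index 1) is selected for localization instead.
scores = [0.15, 0.82]
print(select_camera(scores))  # → 1
```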