Navigation using only one marker, which contains four artificial features, is a challenging task since camera pose estimation from only four coplanar points suffers from the rotational ambiguity problem in real-world applications. This paper presents a vision-based navigation framework for a self-driving vehicle equipped with multiple cameras and a wheel odometer. A multi-camera setup providing 360-degree vision is presented for the camera cluster so that the framework requires only a single planar marker. A Kalman-filter-based method is introduced to fuse the multi-camera measurements with the wheel odometry. Furthermore, an algorithm is proposed to resolve the rotational ambiguity problem by using the Kalman filter prediction as additional information. Finally, lateral and longitudinal controllers are provided. Experiments are conducted to illustrate the effectiveness of the proposed framework.
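The following is a minimal, illustrative sketch (not the paper's implementation) of the two ideas summarized above: a Kalman filter that predicts the planar vehicle pose from wheel odometry and corrects it with a camera-derived marker pose, and disambiguation of the two pose candidates produced by planar-marker pose estimation by keeping the one closest to the filter prediction. The state is assumed to be [x, y, yaw], the measurement is assumed to be a direct pose observation, and all function names (`predict`, `update`, `select_candidate`) are hypothetical.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def predict(x, P, v, omega, dt, Q):
    """Propagate pose [x, y, yaw] with a unicycle model driven by wheel-odometry speed v and yaw rate omega."""
    x_pred = np.array([x[0] + v * dt * np.cos(x[2]),
                       x[1] + v * dt * np.sin(x[2]),
                       wrap(x[2] + omega * dt)])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(x[2])],
                  [0.0, 1.0,  v * dt * np.cos(x[2])],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def update(x, P, z, R):
    """Correct the pose with a camera measurement z = [x, y, yaw]; H = I for a direct pose observation."""
    y = z - x
    y[2] = wrap(y[2])
    K = P @ np.linalg.inv(P + R)
    x_new = x + K @ y
    x_new[2] = wrap(x_new[2])
    return x_new, (np.eye(3) - K) @ P

def select_candidate(candidates, x_pred):
    """Resolve the planar-marker rotational ambiguity: keep the candidate closest to the KF prediction."""
    def dist(c):
        d = c - x_pred
        d[2] = wrap(d[2])
        return float(d @ d)
    return min(candidates, key=dist)

if __name__ == "__main__":
    x = np.zeros(3)                        # initial pose estimate
    P = np.eye(3) * 0.1                    # initial covariance
    Q = np.diag([1e-3, 1e-3, 1e-4])        # odometry process noise (assumed values)
    R = np.diag([5e-3, 5e-3, 1e-2])        # camera measurement noise (assumed values)

    # Predict with wheel odometry, then update with the disambiguated camera pose.
    x, P = predict(x, P, v=1.0, omega=0.1, dt=0.05, Q=Q)
    candidates = [np.array([0.05, 0.0, 0.006]),
                  np.array([0.05, 0.0, 2.9])]  # two ambiguous marker-pose hypotheses
    z = select_candidate(candidates, x)
    x, P = update(x, P, z, R)
    print(x)
```

In this sketch the ambiguity is resolved purely by proximity to the prediction; the paper's actual algorithm and its treatment of the multi-camera cluster may differ.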