Navigation using a single marker containing only four artificial features is a challenging task, because camera pose estimation from four coplanar points suffers from rotational ambiguity in real-world applications. This paper presents a vision-based navigation framework for a self-driving vehicle equipped with multiple cameras and a wheel odometer. A multiple-camera setup is presented in which the camera cluster provides $360^{\circ}$ vision, so that our framework requires only one planar marker. A Kalman-Filter-based fusion method is introduced to combine the multiple-camera measurements with the wheel odometry. Furthermore, an algorithm is proposed that resolves the rotational ambiguity by using the Kalman Filter prediction as additional information. Finally, lateral and longitudinal controllers are provided. Experiments are conducted to demonstrate the effectiveness of the proposed framework.
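To illustrate the idea of disambiguation with a predicted pose, the following is a minimal sketch (not the authors' implementation) using OpenCV's IPPE solver, which returns both ambiguous pose candidates for four coplanar marker corners; the candidate whose rotation lies closest to the Kalman-Filter-predicted rotation is kept. The marker geometry, intrinsics, and predicted pose below are hypothetical placeholders.

```python
# Sketch: resolve the planar-pose rotational ambiguity by choosing the IPPE
# solution closest to a Kalman-Filter-predicted rotation (illustrative only).
import cv2
import numpy as np

def select_pose(obj_pts, img_pts, K, dist, R_pred):
    """Return the (R, t) candidate whose rotation is closest to the prediction."""
    # IPPE returns both ambiguous solutions for four coplanar marker corners.
    _, rvecs, tvecs, _ = cv2.solvePnPGeneric(
        obj_pts, img_pts, K, dist, flags=cv2.SOLVEPNP_IPPE)
    best, best_angle = None, np.inf
    for rvec, tvec in zip(rvecs, tvecs):
        R, _ = cv2.Rodrigues(rvec)
        # Geodesic distance between candidate and predicted rotations.
        cos_a = np.clip((np.trace(R_pred.T @ R) - 1.0) / 2.0, -1.0, 1.0)
        angle = np.arccos(cos_a)
        if angle < best_angle:
            best, best_angle = (R, tvec), angle
    return best

# Hypothetical example: a 0.2 m square marker, dummy intrinsics and detections.
obj_pts = np.array([[-0.1,  0.1, 0], [ 0.1,  0.1, 0],
                    [ 0.1, -0.1, 0], [-0.1, -0.1, 0]], dtype=np.float32)
img_pts = np.array([[300, 200], [340, 202], [338, 242], [298, 240]],
                   dtype=np.float32)
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float64)
R_pred = np.eye(3)  # rotation predicted by the Kalman Filter (placeholder)
R, t = select_pose(obj_pts, img_pts, K, np.zeros(5), R_pred)
```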