This paper addresses the problem of developing an algorithm for autonomous ship landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs), using only a monocular camera on the UAV for tracking and localization. Ship landing is a challenging task due to the small landing space, the six-degree-of-freedom motion of the ship deck, limited visual references for localization, and adversarial environmental conditions such as wind gusts. We first develop a computer vision algorithm that estimates the relative position of the UAV with respect to a horizon reference bar on the landing platform using the image stream from the monocular camera. Our approach is motivated by the actual ship landing procedure followed by Navy helicopter pilots, who track the horizon reference bar as a visual cue. We then develop a robust reinforcement learning (RL) algorithm for controlling the UAV toward the landing platform even in the presence of adversarial environmental conditions such as wind gusts. We demonstrate the superior performance of our algorithm compared to a benchmark nonlinear PID control approach, both in simulation experiments using the Gazebo environment and in a real-world setting using a Parrot ANAFI quad-rotor and a sub-scale ship platform undergoing six-degree-of-freedom (6-DOF) deck motion.
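As an illustration of the vision step described above, the following is a minimal sketch (not the paper's implementation) of estimating the UAV's lateral and vertical offset from a horizon reference bar detected in a monocular camera frame. The bar color, its assumed physical width BAR_WIDTH_M, the focal length FX_PX, and the function names are all illustrative assumptions.

```python
import cv2
import numpy as np

BAR_WIDTH_M = 1.0   # assumed physical width of the reference bar (m)
FX_PX = 600.0       # assumed camera focal length in pixels

def estimate_offset(frame_bgr):
    """Return (lateral_m, vertical_m, range_m) of the bar centre relative to the
    image centre, or None if the bar is not found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Threshold for a (hypothetical) green bar; the HSV bounds are placeholders to be tuned.
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    bar = max(contours, key=cv2.contourArea)           # largest blob taken as the bar
    x, y, w, h = cv2.boundingRect(bar)
    u, v = x + w / 2.0, y + h / 2.0                    # bar centre in pixel coordinates
    cx, cy = frame_bgr.shape[1] / 2.0, frame_bgr.shape[0] / 2.0
    range_m = FX_PX * BAR_WIDTH_M / max(w, 1)          # pinhole-model range from apparent width
    lateral_m = (u - cx) * range_m / FX_PX             # back-project pixel offsets to metres
    vertical_m = (v - cy) * range_m / FX_PX
    return lateral_m, vertical_m, range_m
```

The resulting relative-position estimate would serve as the observation fed to the controller (RL policy or the benchmark PID) in a closed loop; the detection and calibration details here are placeholders rather than the method used in the paper.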