The combination of a small unmanned ground vehicle (UGV) and a large unmanned carrier vehicle allows more flexibility in real applications such as rescue in dangerous scenarios. The autonomous recovery system, which guides the small UGV back to the carrier vehicle, is an essential component for achieving a seamless combination of the two vehicles. This paper proposes a novel autonomous recovery framework with a low-cost monocular vision system to provide accurate positioning and attitude estimation of the UGV during navigation. First, we introduce a lightweight convolutional neural network called UGV-KPNet to detect the keypoints of the small UGV in images captured by a monocular camera. UGV-KPNet is computationally efficient with a small number of parameters and provides pixel-level accurate keypoint detection results in real time. Then, the six-degrees-of-freedom (6-DoF) pose is estimated from the detected keypoints to obtain the position and attitude of the UGV. In addition, we create the first large-scale real-world keypoint dataset of the UGV. The experimental results demonstrate that the proposed system achieves state-of-the-art performance in terms of both accuracy and speed on UGV keypoint detection, and can further boost the 6-DoF pose estimation for the UGV.
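The abstract does not detail how UGV-KPNet's outputs are decoded; as an illustrative sketch only, keypoint networks of this kind commonly regress one heatmap per keypoint, and the pixel coordinates are read off as the per-channel argmax, scaled back by the network's output stride (the `stride=4` default here is an assumption, not a value from the paper):

```python
import numpy as np

def decode_keypoints(heatmaps, stride=4):
    """Decode per-keypoint heatmaps of shape (C, H, W) into pixel
    coordinates at input resolution, plus a confidence per keypoint.
    This is a generic heatmap-argmax decoder, not UGV-KPNet's exact scheme."""
    C, H, W = heatmaps.shape
    flat = heatmaps.reshape(C, -1)
    idx = flat.argmax(axis=1)                    # peak location per channel
    ys, xs = np.unravel_index(idx, (H, W))       # back to 2-D indices
    conf = flat[np.arange(C), idx]               # peak value as confidence
    coords = np.stack([xs * stride, ys * stride], axis=1)
    return coords, conf

# Tiny demo: two channels with known peaks at (y=10, x=20) and (y=30, x=5)
heatmaps = np.zeros((2, 60, 80))
heatmaps[0, 10, 20] = 1.0
heatmaps[1, 30, 5] = 0.8
coords, conf = decode_keypoints(heatmaps)
```

With a stride of 4, the two peaks decode to pixel coordinates (80, 40) and (20, 120).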
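The abstract does not specify which solver recovers the 6-DoF pose from the detected keypoints. As a minimal sketch, one standard approach is a Perspective-n-Point (PnP) solve from 2D keypoints and their known 3D positions on the UGV body; the Direct Linear Transform (DLT) below is a hedged illustration of that idea, not the paper's actual method (the intrinsics and the cube-corner model points are made-up values):

```python
import numpy as np

def dlt_pnp(pts3d, pts2d, K):
    """Recover the camera pose [R | t] from >= 6 non-coplanar 2D-3D
    correspondences via the Direct Linear Transform (DLT)."""
    # Work in normalized camera coordinates: x_n = K^-1 [u, v, 1]^T
    ones = np.ones((len(pts2d), 1))
    xn = (np.linalg.inv(K) @ np.hstack([pts2d, ones]).T).T
    A = []
    for (X, Y, Z), (u, v, _) in zip(pts3d, xn):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The pose matrix (up to scale) spans the null space of A
    _, _, Vt = np.linalg.svd(np.asarray(A))
    M = Vt[-1].reshape(3, 4)
    # Fix the overall sign so the first point lies in front of the camera
    if M[2, :3] @ pts3d[0] + M[2, 3] < 0:
        M = -M
    # Project the 3x3 block onto the rotation group; rescale the translation
    U, S, Vt2 = np.linalg.svd(M[:, : 3])
    return U @ Vt2, M[:, 3] / S.mean()

# Synthetic check: project known model points with a known pose, then recover it
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])  # assumed intrinsics
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])       # yaw of 0.3 rad
t_true = np.array([0.1, -0.1, 2.0])
pts3d = 0.2 * np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                        [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], float)
proj = (K @ (R_true @ pts3d.T + t_true[:, None])).T
pts2d = proj[:, :2] / proj[:, 2:3]
R_est, t_est = dlt_pnp(pts3d, pts2d, K)
```

On noiseless correspondences the DLT recovers the rotation and translation exactly (up to numerical precision); in practice a nonlinear refinement step is typically run afterwards.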