This paper addresses outdoor terrain mapping using overhead images obtained from an unmanned aerial vehicle. Dense depth estimation from aerial images during flight is challenging. While feature-based localization and mapping techniques can deliver real-time odometry and sparse point reconstruction, a dense environment model is generally recovered offline at significant computation and storage cost. This paper develops a joint 2D-3D learning approach to reconstruct local meshes at each camera keyframe, which can be assembled into a global environment model. Each local mesh is initialized from sparse depth measurements. We associate image features with the mesh vertices through camera projection and apply graph convolution to refine the mesh vertices under joint 2D reprojected-depth and 3D mesh supervision. Quantitative and qualitative evaluations using real aerial images show the potential of our method to support environmental monitoring and surveillance applications.
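The core pipeline in the abstract — project mesh vertices into the image, gather per-vertex features, and refine vertex positions with graph convolution — can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (pinhole intrinsics, nearest-neighbor feature sampling, a single mean-aggregation graph-convolution layer with random untrained weights); all function and variable names are hypothetical, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_vertices(V, K):
    """Pinhole projection of camera-frame points V (N,3) with intrinsics K (3,3)."""
    uvw = (K @ V.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def sample_features(feat, uv):
    """Nearest-neighbor sampling of an (H,W,C) image feature map at projected pixels."""
    H, W, _ = feat.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    return feat[v, u]

def graph_conv(X, A, W):
    """One mean-aggregation graph-convolution layer with ReLU (a common GCN variant)."""
    deg = A.sum(axis=1, keepdims=True) + 1e-8
    return np.maximum(((A @ X) / deg) @ W, 0.0)

# Toy local mesh: 4 vertices in the camera frame with ring connectivity.
V = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0],
              [0.5, 0.5, 2.0], [0.0, 0.5, 2.0]])
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0, 0.0, 1.0]])
feat = rng.standard_normal((64, 64, 8))      # stand-in for a learned CNN feature map

uv = project_vertices(V, K)                  # (4, 2) pixel coordinates
X = np.concatenate([sample_features(feat, uv), V], axis=1)  # per-vertex features (4, 11)
W1 = rng.standard_normal((11, 3)) * 0.01     # untrained layer weights, illustrative only
V_refined = V + graph_conv(X, A, W1)         # graph conv predicts per-vertex offsets
```

In training, the refined vertices would be supervised both by reprojecting them to 2D depth against sparse measurements and by a 3D mesh loss, per the joint 2D-3D supervision the abstract describes.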