In this work we propose a holistic framework for autonomous aerial inspection tasks, using semantically-aware yet computationally efficient planning and mapping algorithms. The system leverages state-of-the-art receding-horizon exploration techniques for next-best-view (NBV) planning, combined with geometric and semantic segmentation information provided by deep convolutional neural networks (DCNNs), with the goal of enriching environment representations. The contributions of this article are threefold. First, we propose an efficient sensor observation model and a reward function that encodes the expected information gain of observations taken from specific viewpoints. Second, we extend the reward function to incorporate not only geometric but also semantic probabilistic information, provided by a DCNN for semantic segmentation that operates in real time. Incorporating semantic information into the environment representation allows exploration to be biased towards specific objects of interest while ignoring task-irrelevant ones during planning. Finally, we apply our approach to an autonomous drone shipyard inspection task. A set of simulations in realistic scenarios demonstrates the efficacy and efficiency of the proposed framework when compared with the state of the art.
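To make the idea of a semantically weighted information-gain reward concrete, the following minimal sketch illustrates one plausible form such a reward could take for scoring candidate viewpoints. It is not the paper's actual formulation: the function names, the per-class weights, and the travel-cost penalty are illustrative assumptions, showing only how geometric (occupancy) uncertainty can be combined with per-voxel semantic probabilities to bias exploration towards task-relevant objects.

```python
import numpy as np

def voxel_entropy(p_occ: float) -> float:
    """Shannon entropy (in bits) of a voxel's occupancy probability."""
    p = np.clip(p_occ, 1e-6, 1.0 - 1e-6)
    return float(-(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p)))

def viewpoint_reward(visible_voxels, class_weights, lambda_cost=0.1, path_cost=0.0):
    """Sum semantically weighted occupancy entropy over the voxels visible
    from a candidate viewpoint, penalised by the cost of reaching it.

    visible_voxels : iterable of (p_occ, class_probs) pairs, where class_probs
                     maps semantic labels to probabilities (e.g. from a DCNN).
    class_weights  : dict mapping semantic labels to task-relevance weights.
    """
    gain = 0.0
    for p_occ, class_probs in visible_voxels:
        # Expected task relevance of this voxel under its semantic distribution.
        relevance = sum(class_weights.get(c, 0.0) * p for c, p in class_probs.items())
        gain += relevance * voxel_entropy(p_occ)
    return gain - lambda_cost * path_cost

# Hypothetical usage: favour voxels likely to belong to a vessel's hull over background.
voxels = [(0.5, {"hull": 0.8, "background": 0.2}),
          (0.5, {"background": 1.0})]
weights = {"hull": 1.0, "background": 0.1}
print(viewpoint_reward(voxels, weights, path_cost=2.0))
```

In this sketch, setting a class weight to zero makes the planner indifferent to uncertainty in voxels of that class, which is one simple way the reward can ignore task-irrelevant structure during NBV selection.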