In this paper, we present a goal-driven autonomous mapping and exploration system that combines reactive and planned robot navigation. First, a navigation policy is learned through a deep reinforcement learning (DRL) framework in a simulated environment. This policy guides an autonomous agent toward a goal while avoiding obstacles. We develop a navigation system in which this learned policy is integrated into a motion planning stack as the local navigation layer, moving the robot toward intermediate goals. A global path planner is used to mitigate the local-optimum problem and guide the robot toward the global goal. Possible intermediate goal locations are extracted from the environment and used as local goals according to the navigation system's heuristics. Fully autonomous navigation is performed without any prior knowledge of the environment, while mapping is performed as the robot moves through it. Experiments demonstrate the system's ability to navigate previously unknown surroundings and arrive at the designated goal.
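The hierarchical structure described above, a global planner supplying intermediate goals that a learned local policy then pursues, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the global planner is plain BFS on a known occupancy grid (the paper operates on a map built online), and the learned DRL policy is replaced by a greedy stand-in; all function names (`bfs_path`, `local_policy`, `navigate`) are hypothetical.

```python
# Toy sketch of a hierarchical navigation loop: a global planner (BFS on an
# occupancy grid) yields waypoints used as intermediate goals, and a local
# policy (a greedy stand-in for the learned DRL policy) steps toward each one.
from collections import deque

def bfs_path(grid, start, goal):
    """Global planner: shortest 4-connected path on an occupancy grid (0 = free)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cur
                q.append(nxt)
    return []  # goal unreachable

def local_policy(pos, local_goal):
    """Stand-in for the learned DRL policy: one greedy step toward the local goal.
    (A real policy would map sensor observations to velocity commands.)"""
    r, c = pos
    gr, gc = local_goal
    return (r + (gr > r) - (gr < r), c + (gc > c) - (gc < c))

def navigate(grid, start, goal, stride=2):
    """Use every `stride`-th waypoint of the global path as an intermediate goal."""
    path = bfs_path(grid, start, goal)
    waypoints = path[stride::stride] + ([goal] if path else [])
    pos, trace = start, [start]
    for wp in waypoints:
        while pos != wp:
            pos = local_policy(pos, wp)
            trace.append(pos)
    return trace
```

The split mirrors the abstract's design choice: the reactive local layer handles obstacle avoidance between nearby goals, while the global planner prevents it from stalling in a local optimum far from the final destination.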