Autonomous navigation of mobile robots is essential in use cases such as delivery, assistance, or logistics. Although traditional planning methods are well integrated into existing navigation systems, they struggle in highly dynamic environments. Deep-reinforcement-learning-based methods, on the other hand, show superior performance in dynamic obstacle avoidance but are not suitable for long-range navigation and are prone to local minima. In this paper, we propose a deep-reinforcement-learning-based control switch that selects between different planning paradigms based solely on sensor-data observations. To this end, we develop an interface to efficiently operate multiple model-based as well as learning-based local planners, and we integrate a variety of state-of-the-art planners for the control switch to choose from. We evaluate our approach against each planner individually and find improved navigation performance, especially in highly dynamic scenarios. Our planner prefers learning-based approaches in situations with many obstacles while relying on traditional model-based planners in long corridors or empty spaces.
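The core idea of the control switch — mapping a raw sensor observation to a choice of local planner — can be sketched in a few lines. The planner names, the obstacle-density heuristic, and the decision threshold below are illustrative assumptions standing in for the learned policy described in the abstract; they are not the paper's actual architecture.

```python
import numpy as np

# Hypothetical planner registry: model-based and learning-based local planners.
# The names are assumptions for illustration, not the paper's exact set.
PLANNERS = ["teb", "mpc", "drl_cadrl", "drl_rlca"]

def obstacle_density(scan: np.ndarray, threshold: float = 2.0) -> float:
    """Fraction of laser beams returning a range closer than `threshold` metres."""
    return float(np.mean(scan < threshold))

def select_planner(scan: np.ndarray) -> str:
    """Toy stand-in for the learned switch: prefer a learning-based planner in
    cluttered, dynamic scenes and a model-based one in open space or corridors."""
    if obstacle_density(scan) > 0.3:   # many nearby obstacles
        return "drl_cadrl"             # learning-based dynamic obstacle avoidance
    return "teb"                       # model-based planner for sparse scenes

# Example: an open 360-beam scan vs. a cluttered one.
open_scan = np.full(360, 10.0)
cluttered_scan = np.concatenate([np.full(200, 1.0), np.full(160, 10.0)])
print(select_planner(open_scan))       # -> "teb"
print(select_planner(cluttered_scan))  # -> "drl_cadrl"
```

In the paper's approach this hand-written rule is replaced by a policy trained with deep reinforcement learning, so the switching criterion is learned from sensor observations rather than fixed by a threshold.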