In this work, we present two Deep Reinforcement Learning (Deep-RL) approaches to the problem of mapless navigation for a terrestrial mobile robot. Our methodology focuses on comparing a Deep-RL technique based on the Deep Q-Network (DQN) algorithm with a second one based on the Double Deep Q-Network (DDQN) algorithm. Our agents receive 24 laser measurement samples and the relative position and angle of the agent with respect to the target as inputs, and output actions as velocities for the robot. By using a low-dimensional sensing structure for learning, we show that it is possible to train an agent to perform navigation-related tasks and obstacle avoidance without relying on complex sensing information. The proposed methodology was successfully applied in three distinct simulated environments. Overall, Double Deep structures further improved mobile robot navigation performance when compared to structures with simple Q networks.
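The difference between the two compared algorithms comes down to how the bootstrap target is computed. A minimal sketch of the two targets, using toy Q-value arrays and an assumed discount factor (the networks, state layout, and values below are illustrative, not the paper's implementation):

```python
import numpy as np

gamma = 0.99  # discount factor (assumed value for illustration)

def dqn_target(reward, q_next_target):
    # DQN: the target network both selects and evaluates the next
    # action, which is known to overestimate Q-values.
    return reward + gamma * np.max(q_next_target)

def ddqn_target(reward, q_next_online, q_next_target):
    # DDQN: the online network selects the action, the target network
    # evaluates it, decoupling selection from evaluation.
    a = np.argmax(q_next_online)
    return reward + gamma * q_next_target[a]

# Toy next-state Q-values for a 3-action discrete velocity set
reward = 1.0
q_next_online = np.array([0.2, 0.9, 0.5])   # online-network estimates
q_next_target = np.array([0.8, 0.3, 0.6])   # target-network estimates

print(dqn_target(reward, q_next_target))                  # 1 + 0.99 * 0.8 = 1.792
print(ddqn_target(reward, q_next_online, q_next_target))  # 1 + 0.99 * 0.3 = 1.297
```

In this toy case the DDQN target is lower because the action the online network prefers is valued more conservatively by the target network; this decoupling is what mitigates the overestimation bias of plain DQN.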