This paper presents a technique for mobile robot navigation using a Deep Q-Network (DQN) combined with a Gated Recurrent Unit (GRU). Integrating the GRU into the DQN enables action skipping, which improves navigation performance. The technique targets efficient navigation for mobile robots such as autonomous parking robots. The reinforcement learning framework applies to the DQN combined with the GRU in a real environment, which can be modeled as a Partially Observable Markov Decision Process (POMDP). By allowing action skipping, the ability of the DQN combined with the GRU to learn key actions is improved. The proposed algorithm is evaluated in the ROS-Gazebo simulator to explore the feasibility of the solution in a real environment, and the simulation results show that it achieves better navigation and collision-avoidance performance than both the DQN alone and the DQN combined with the GRU without action skipping.
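The action-skipping idea mentioned above can be illustrated as an environment wrapper that repeats each chosen action for a fixed number of steps while accumulating reward, so the agent only makes decisions at key moments. The following is a minimal sketch, not the paper's implementation: the class names, the Gym-style `reset`/`step` interface, and the toy counting environment are all illustrative assumptions.

```python
class ActionSkipWrapper:
    """Illustrative action-skipping wrapper (not from the paper).

    Repeats each selected action for `skip` environment steps and
    accumulates the reward, so the agent chooses actions less often
    and can focus on learning key actions.
    """

    def __init__(self, env, skip=4):
        self.env = env
        self.skip = skip

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward = 0.0
        obs, done = None, False
        for _ in range(self.skip):
            obs, reward, done = self.env.step(action)
            total_reward += reward
            if done:  # stop repeating once the episode ends
                break
        return obs, total_reward, done


class CountingEnv:
    """Toy environment for demonstration: each step yields reward 1.0
    and the episode ends after `horizon` steps."""

    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= self.horizon


env = ActionSkipWrapper(CountingEnv(horizon=10), skip=4)
env.reset()
print(env.step(0))  # (4, 4.0, False): action repeated 4 times
print(env.step(0))  # (8, 4.0, False)
print(env.step(0))  # (10, 2.0, True): episode ends mid-repeat
```

In a DQN+GRU agent, the GRU's hidden state would still be updated with every underlying observation; the wrapper only reduces how often a new action must be selected.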