In this paper, we propose a novel reinforcement learning (RL) based path generation (RL-PG) approach for mobile robot navigation that requires no prior exploration of the unknown environment. Multiple predictive path points are dynamically generated for the robot to track by a deep Markov model optimized with an RL approach. To ensure safety while tracking these predictive points, the robot's motion is adjusted by a motion fine-tuning module. By using a deep Markov model with an RL algorithm for planning, the approach focuses on the relationship between adjacent path points. We show that our proposed approach is more effective and achieves a higher success rate than the RL-based approach DWA-RL and the traditional navigation approach APF. We deploy our model on both simulation and physical platforms and demonstrate that it performs robot navigation effectively and safely.
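The two-stage structure described above (a Markov policy that generates predictive path points, followed by a safety fine-tuning step) can be illustrated with a minimal sketch. This is not the paper's implementation: the `policy` function, the repulsion-based `fine_tune_motion` rule, and all parameters below are hypothetical placeholders standing in for the learned deep Markov model and the motion fine-tuning module.

```python
import numpy as np

def generate_waypoints(policy, state, horizon=5):
    """Roll out a (hypothetical) Markov policy to produce predictive path points.

    Each next point depends only on the previous one (Markov property),
    mirroring the focus on the relationship between adjacent path points.
    """
    points = []
    p = np.asarray(state, dtype=float)
    for _ in range(horizon):
        p = p + policy(p)  # policy maps the current point to a displacement
        points.append(p.copy())
    return points

def fine_tune_motion(point, obstacles, safe_dist=0.5):
    """Toy stand-in for the motion fine-tuning module: push a tracked
    point away from any obstacle closer than safe_dist."""
    p = np.asarray(point, dtype=float)
    for obs in obstacles:
        d = p - np.asarray(obs, dtype=float)
        dist = np.linalg.norm(d)
        if 0 < dist < safe_dist:
            p = p + d / dist * (safe_dist - dist)
    return p

# Example: a fixed policy stepping toward +x, with one obstacle near the path.
policy = lambda p: np.array([0.2, 0.0])
waypoints = generate_waypoints(policy, state=[0.0, 0.0], horizon=5)
safe = [fine_tune_motion(w, obstacles=[[0.6, 0.0]]) for w in waypoints]
```

In the actual method, the policy would be the RL-optimized deep Markov model and the fine-tuning step would use the robot's sensed surroundings rather than a fixed obstacle list.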