Deep reinforcement learning (DRL) algorithms have proven effective in robot navigation, especially in unknown environments, by directly mapping perception inputs to robot control commands. Most existing methods adopt a uniform execution duration, with the robot taking commands at fixed intervals. As such, the execution duration becomes a crucial parameter of the navigation algorithm. In particular, if the duration is too short, the navigation policy is executed at a high frequency, which increases training difficulty and computational cost. Conversely, if the duration is too long, the policy cannot handle complex situations, such as those with crowded obstacles. It is thus tricky to find the "sweet" duration range; some duration values may even prevent a DRL model from finding a navigation path at all. In this paper, we propose to employ adaptive execution duration to overcome this problem. Specifically, we formulate the navigation task as a Semi-Markov Decision Process (SMDP) to handle adaptive execution duration. We also improve the distributed proximal policy optimization (DPPO) algorithm and provide a theoretical guarantee for the specified SMDP problem. We evaluate our approach both in simulation and on a physical robot. The results show that our approach outperforms the DRL-based method with fixed execution duration by 10.3% in terms of navigation success rate.
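As background, the key difference from a standard MDP is that each action persists for a variable duration, so discounting is applied per unit of elapsed time rather than per decision. A minimal sketch of the standard SMDP value backup (following the options framework; the symbols $\tau$ for execution duration and $R(s,a,\tau)$ for the reward accumulated over that duration are illustrative and not necessarily the paper's exact notation or objective):
\[
Q(s,a) \;=\; \mathbb{E}_{s',\tau}\!\left[\, R(s,a,\tau) \;+\; \gamma^{\tau} \max_{a'} Q(s',a') \,\right]
\]
Here a longer execution duration $\tau$ discounts the bootstrapped value more heavily, which is how the adaptively chosen duration enters the learning objective.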