Model-free reinforcement learning (RL) is a powerful approach for learning control policies directly from high-dimensional states and observations. However, it tends to be data-inefficient, which is especially costly in robotic learning tasks. On the other hand, optimal control does not require data if the system model is known, but cannot scale to models with high-dimensional states and observations. To exploit the benefits of both model-free RL and optimal control, we propose time-to-reach-based (TTR-based) reward shaping, an optimal-control-inspired technique that alleviates data inefficiency while retaining the advantages of model-free RL. This is achieved by summarizing key system model information in a TTR function, which greatly speeds up the RL process, as shown in our simulation results. The TTR function is defined as the minimum time required to move from any state to the goal under assumed system dynamics constraints. Since the TTR function is computationally intractable for systems with high-dimensional states, we compute it for approximate, lower-dimensional system models that still capture key dynamic behaviors. Our approach can be flexibly and easily incorporated into any model-free RL algorithm without altering the original algorithm structure, and is compatible with any other techniques that may facilitate the RL process. We evaluate our approach on two representative robotic learning tasks and three well-known model-free RL algorithms, and show significant improvements in data efficiency and performance.
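To make the idea concrete, the following is a minimal sketch (not the paper's actual implementation) of the two ingredients described above: a TTR function computed on a low-dimensional approximate model, and a reward shaped by that function. Here the "approximate dynamics" are simplified to unit-speed 4-connected motion on a grid, so the TTR reduces to a Dijkstra-style sweep from the goal; the function names, the grid model, and the `scale` parameter are all illustrative assumptions.

```python
import numpy as np
from heapq import heappush, heappop

def compute_ttr(grid_shape, goal, dt=1.0):
    """Minimum time to reach `goal` from every cell, assuming unit-speed
    4-connected motion (a stand-in for the paper's lower-dimensional
    approximate system dynamics). Dijkstra sweep outward from the goal."""
    ttr = np.full(grid_shape, np.inf)
    ttr[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        t, (i, j) = heappop(pq)
        if t > ttr[i, j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < grid_shape[0] and 0 <= nj < grid_shape[1]:
                nt = t + dt  # one time step per transition
                if nt < ttr[ni, nj]:
                    ttr[ni, nj] = nt
                    heappush(pq, (nt, (ni, nj)))
    return ttr

def shaped_reward(env_reward, state, ttr, scale=0.1):
    """TTR-based reward shaping: penalize states in proportion to their
    time-to-reach, so states closer (in time) to the goal look better.
    This wraps the environment reward without touching the RL algorithm."""
    return env_reward - scale * ttr[state]
```

Because the shaping term only modifies the reward signal, it can be dropped into any model-free RL loop unchanged, which is the "no alteration of the original algorithm structure" property claimed above.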