Reinforcement learning (RL) in low-data and risk-sensitive domains requires performant and flexible deployment policies that can readily incorporate constraints during deployment. One such class of policies is the semi-parametric H-step lookahead policies, which select actions using trajectory optimization over a dynamics model for a fixed horizon with a terminal value function. In this work, we investigate a novel instantiation of H-step lookahead with a learned model and a terminal value function learned by a model-free off-policy algorithm, named Learning Off-Policy with Online Planning (LOOP). We provide a theoretical analysis of this method, suggesting a tradeoff between model errors and value function errors, and empirically demonstrate that this tradeoff is beneficial in deep reinforcement learning. Furthermore, we identify the "Actor Divergence" issue in this framework and propose Actor Regularized Control (ARC), a modified trajectory optimization procedure. We evaluate our method on a set of robotic tasks for Offline and Online RL and demonstrate improved performance. We also show the flexibility of LOOP to incorporate safety constraints during deployment using a set of navigation environments. We demonstrate that LOOP is a desirable framework for robotics applications based on its strong performance in various important RL settings. Project video and details can be found at https://hari-sikchi.github.io/loop.
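To make the H-step lookahead structure concrete, below is a minimal sketch (not the authors' implementation) of action selection via random-shooting trajectory optimization over a learned dynamics model with a terminal value function. The callables `dynamics_fn`, `reward_fn`, and `value_fn` are hypothetical placeholders standing in for learned components.

```python
import numpy as np

def h_step_lookahead_action(state, dynamics_fn, reward_fn, value_fn,
                            action_dim, horizon=5, num_samples=500,
                            gamma=0.99, action_low=-1.0, action_high=1.0):
    """Select an action by maximizing H-step model returns plus a terminal value.

    Scores `num_samples` random action sequences of length `horizon` under the
    (learned) dynamics and reward models, adds the discounted terminal value
    estimate, and returns the first action of the best sequence
    (random-shooting trajectory optimization).
    """
    # Candidate action sequences: (num_samples, horizon, action_dim)
    actions = np.random.uniform(action_low, action_high,
                                size=(num_samples, horizon, action_dim))
    states = np.repeat(state[None, :], num_samples, axis=0)
    returns = np.zeros(num_samples)

    # Roll each candidate sequence through the dynamics model for H steps.
    for t in range(horizon):
        returns += (gamma ** t) * reward_fn(states, actions[:, t])
        states = dynamics_fn(states, actions[:, t])

    # Bootstrap beyond the horizon with the terminal value function.
    returns += (gamma ** horizon) * value_fn(states)

    best = np.argmax(returns)
    return actions[best, 0]

# Toy stand-ins for the learned components, for illustration only:
state_dim, action_dim = 2, 2
dynamics_fn = lambda s, a: s + 0.1 * a            # toy linear dynamics
reward_fn = lambda s, a: -np.sum(s ** 2, axis=1)  # reward: stay near origin
value_fn = lambda s: -np.sum(s ** 2, axis=1)      # toy terminal value
action = h_step_lookahead_action(np.ones(state_dim), dynamics_fn,
                                 reward_fn, value_fn, action_dim)
```

In LOOP, the terminal value function would come from a model-free off-policy learner and the trajectory optimizer would be regularized toward the learned actor (ARC); the random-shooting optimizer above is only a simple illustration of the H-step lookahead idea.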