Inverse reinforcement learning (IRL) seeks to infer a cost function that explains the underlying goals and preferences of expert demonstrations. This paper presents receding horizon inverse reinforcement learning (RHIRL), a new IRL algorithm for high-dimensional, noisy, continuous systems with black-box dynamic models. RHIRL addresses two key challenges of IRL: scalability and robustness. To handle high-dimensional continuous systems, RHIRL matches the induced optimal trajectories with expert demonstrations locally in a receding horizon manner and 'stitches' together the local solutions to learn the cost; it thereby avoids the 'curse of dimensionality'. This contrasts sharply with earlier algorithms that match with expert demonstrations globally over the entire high-dimensional state space. To be robust against imperfect expert demonstrations and control noise, RHIRL learns a state-dependent cost function 'disentangled' from system dynamics under mild conditions. Experiments on benchmark tasks show that RHIRL outperforms several leading IRL algorithms in most instances. We also prove that the cumulative error of RHIRL grows linearly with the task duration.
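To make the receding-horizon idea concrete, below is a minimal conceptual sketch of such a loop, not the authors' implementation: a linear state-dependent cost is fit by matching locally planned trajectories to short windows of the expert demonstration, then sliding the window forward. The feature map, random-shooting planner, toy dynamics `step`, and all hyperparameters (`window_len`, `n_iters`, `lr`) are illustrative assumptions.

```python
# Conceptual receding-horizon IRL sketch (illustrative only, not RHIRL itself).
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, CTRL_DIM = 4, 2
window_len, n_samples, lr = 5, 64, 1e-2

def step(x, u):
    """Toy black-box dynamics used only for illustration."""
    A = np.eye(STATE_DIM)
    B = np.zeros((STATE_DIM, CTRL_DIM)); B[:CTRL_DIM, :] = np.eye(CTRL_DIM)
    return A @ x + B @ u

def features(x):
    """State features for a linear cost c_w(x) = w . phi(x)."""
    return np.concatenate([x, x**2])

def rollout(x0, controls):
    """Roll the dynamics forward over one horizon window."""
    xs = [x0]
    for u in controls:
        xs.append(step(xs[-1], u))
    return np.array(xs[1:])

def plan(x0, w):
    """Random-shooting planner: pick the control sequence with lowest learned cost."""
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        controls = rng.normal(scale=0.5, size=(window_len, CTRL_DIM))
        xs = rollout(x0, controls)
        cost = sum(w @ features(x) for x in xs)
        if cost < best_cost:
            best, best_cost = xs, cost
    return best

def receding_horizon_irl_sketch(expert_states, n_iters=20):
    """Fit cost weights w by matching planned and expert states window by window."""
    w = np.zeros(2 * STATE_DIM)
    T = len(expert_states)
    for _ in range(n_iters):
        for t in range(0, T - window_len, window_len):
            x0 = expert_states[t]
            expert_win = expert_states[t + 1 : t + 1 + window_len]
            planned_win = plan(x0, w)
            # Step the cost so expert states become cheaper than the planner's alternatives.
            grad = sum(features(x) for x in expert_win) - sum(features(x) for x in planned_win)
            w -= lr * grad
    return w

if __name__ == "__main__":
    # Fake "expert" trajectory, for demonstration purposes only.
    expert = np.cumsum(0.1 * rng.normal(size=(30, STATE_DIM)), axis=0)
    print(receding_horizon_irl_sketch(expert))
```

Each local window starts from an observed expert state, so errors from one window do not compound through the planner; the learned weights `w` are what 'stitch' the local solutions together into a single cost.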