This paper proposes a novel scoring function for the planning module of MPC-based model-based reinforcement learning (MBRL) methods, addressing the inherent bias of using the reward function to score trajectories. The proposed method improves the learning efficiency of existing MPC-based MBRL methods by scoring trajectories with the discounted sum of values instead. It uses the resulting optimal trajectories to guide policy learning and updates its state-action value function from both real-world and augmented on-board data. The learning efficiency of the proposed method is evaluated in selected MuJoCo Gym environments as well as in learning locomotion skills for a simulated model of the Cassie robot. The results demonstrate that the proposed method outperforms current state-of-the-art algorithms in both learning efficiency and average reward return.
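The core idea — ranking MPC candidate trajectories by a discounted sum of learned values rather than by raw rewards — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `value_fn`, the candidate format, and the discount factor are all illustrative assumptions.

```python
import numpy as np

def score_trajectory(states, actions, value_fn, gamma=0.99):
    """Score one candidate trajectory by the discounted sum of
    state-action values (the proposed criterion), instead of the
    discounted sum of predicted rewards.

    `value_fn(s, a)` stands in for a learned Q-function (hypothetical
    interface; the paper's actual function approximator may differ).
    """
    return sum(gamma ** t * value_fn(s, a)
               for t, (s, a) in enumerate(zip(states, actions)))

def select_best_trajectory(candidates, value_fn, gamma=0.99):
    """Pick the highest-scoring candidate from an MPC planner's
    sampled trajectory set; this optimal trajectory can then be
    used to guide policy learning."""
    scores = [score_trajectory(s, a, value_fn, gamma)
              for s, a in candidates]
    return candidates[int(np.argmax(scores))]
```

In a model-predictive control loop, the planner would regenerate and re-score candidates at every step, executing only the first action of the selected trajectory.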