Model-based reinforcement learning (RL) is more sample-efficient than model-free RL because it can train on imaginary trajectories generated by a learned dynamics model. When the model is inaccurate or biased, however, imaginary trajectories may be deleterious for training the action-value and policy functions. To alleviate this problem, this paper proposes to adaptively reweight imaginary transitions so as to reduce the negative effects of poorly generated trajectories. More specifically, we evaluate the effect of an imaginary transition by calculating the change in the loss computed on real samples when the transition is used to train the action-value and policy functions. Based on this evaluation criterion, we realize the reweighting of each imaginary transition through a well-designed meta-gradient algorithm. Extensive experimental results demonstrate that our method outperforms state-of-the-art model-based and model-free RL algorithms on multiple tasks. Visualization of the changing weights further validates the necessity of the proposed reweighting scheme.
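To make the meta-gradient reweighting idea concrete, the following is a minimal sketch, not the authors' implementation: each imaginary transition carries a weight, the critic takes one weighted gradient step on the imaginary batch, and the weights are then adjusted to reduce the critic's loss on a batch of real transitions. The names (`reweight_step`, `td_errors`, `log_w`), the network size, the learning rates, and the simple TD(0) objective are illustrative assumptions.

```python
# Minimal sketch of meta-gradient reweighting of imaginary transitions (assumptions noted above).
import torch
import torch.nn as nn
from torch.func import functional_call

state_dim, action_dim, gamma = 4, 2, 0.99
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def td_errors(params, batch):
    """Per-transition squared TD errors of the critic under the given parameters."""
    s, a, r, s2, a2 = batch  # (state, action, reward, next state, next action)
    q = functional_call(critic, params, torch.cat([s, a], dim=-1)).squeeze(-1)
    with torch.no_grad():  # treat the bootstrap target as fixed
        q_next = functional_call(critic, params, torch.cat([s2, a2], dim=-1)).squeeze(-1)
    return (q - (r + gamma * q_next)) ** 2

def reweight_step(imag_batch, real_batch, log_w, inner_lr=1e-2, meta_lr=1e-2):
    """One meta-gradient update of the per-transition weights log_w (hypothetical helper)."""
    params = dict(critic.named_parameters())
    w = torch.softmax(log_w, dim=0)  # normalized weights over imaginary transitions
    inner_loss = (w * td_errors(params, imag_batch)).sum()
    grads = torch.autograd.grad(inner_loss, params.values(), create_graph=True)
    # One-step "lookahead" critic trained on the weighted imaginary batch.
    new_params = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    # Meta-objective: loss of the lookahead critic on real transitions.
    meta_loss = td_errors(new_params, real_batch).mean()
    grad_w, = torch.autograd.grad(meta_loss, log_w)
    # Transitions whose use increases the real-sample loss get downweighted.
    return (log_w - meta_lr * grad_w).detach().requires_grad_()

# Usage sketch: batches are tuples (s, a, r, s_next, a_next) of matching shapes, and
# log_w = torch.zeros(num_imaginary_transitions, requires_grad=True) before the first call.
```

The same weighted-loss construction would apply to the policy update; only the critic is shown here to keep the example short.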