Model-based deep reinforcement learning has achieved success in various domains that require high sample efficiency, such as Go and robotics. However, several issues remain open, such as planning efficient exploration to learn more accurate dynamics models, evaluating the uncertainty of the learned models, and making more rational use of the models. To mitigate these issues, we present MEEE, a model-ensemble method that consists of optimistic exploration and weighted exploitation. During exploration, unlike prior methods that directly select the optimal action maximizing the expected cumulative return, our agent first generates a set of action candidates and then seeks out the optimal action that takes both the expected return and the novelty of future observations into account. During exploitation, imagined transition tuples are assigned different discounted weights according to their model uncertainty, which prevents model prediction errors from propagating into agent training. Experiments on several challenging continuous control benchmark tasks demonstrate that our approach outperforms other state-of-the-art model-free and model-based methods, especially in sample complexity.
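To make the two mechanisms in the abstract concrete, the following is a minimal sketch, assuming an ensemble of learned dynamics models whose disagreement serves as an uncertainty/novelty signal. The names ensemble_predict, value_fn, kappa, and beta are illustrative assumptions, not the paper's actual interface, and MEEE's precise exploration bonus and weighting scheme may differ.

```python
import numpy as np

def ensemble_predict(models, state, action):
    # Hypothetical ensemble of K learned dynamics models; each maps
    # (state, action) to a predicted next state. Returns shape (K, state_dim).
    return np.stack([m(state, action) for m in models])

def optimistic_action(models, value_fn, state, candidate_actions, kappa=1.0):
    """Pick the candidate action maximizing expected return plus a novelty bonus.

    The novelty bonus here is the disagreement (standard deviation) among
    ensemble predictions of the next state, a common proxy for epistemic
    uncertainty; kappa trades off return against exploration.
    """
    best_action, best_score = None, -np.inf
    for a in candidate_actions:
        preds = ensemble_predict(models, state, a)           # (K, state_dim)
        expected_return = value_fn(preds.mean(axis=0), a)    # estimated return
        novelty = preds.std(axis=0).mean()                   # ensemble disagreement
        score = expected_return + kappa * novelty            # optimistic score
        if score > best_score:
            best_action, best_score = a, score
    return best_action

def transition_weight(models, state, action, beta=1.0):
    """Discount weight for an imagined transition tuple: higher ensemble
    disagreement (higher model uncertainty) yields a smaller weight, limiting
    the influence of unreliable model rollouts on policy/critic updates."""
    preds = ensemble_predict(models, state, action)
    uncertainty = preds.std(axis=0).mean()
    return np.exp(-beta * uncertainty)
```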