Learning complex robot behaviors through interaction requires structured exploration. Planning should target interactions with the potential to optimize long-term performance, while only reducing uncertainty where conducive to this objective. This paper presents Latent Optimistic Value Exploration (LOVE), a strategy that enables deep exploration through optimism in the face of uncertain long-term rewards. We combine latent world models with value function estimation to predict infinite-horizon returns and recover the associated uncertainty via ensembling. The policy is then trained on an upper confidence bound (UCB) objective to identify and select the interactions most promising for improving long-term performance. We apply LOVE to visual robot control tasks in continuous action spaces and demonstrate, on average, more than 20% improved sample efficiency compared to state-of-the-art methods and other exploration objectives. In sparse and hard-to-explore environments we achieve an average improvement of over 30%.
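The UCB objective over an ensemble of return predictions can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the `beta` trade-off parameter, and the toy ensemble values are all illustrative assumptions.

```python
import numpy as np

def ucb_objective(ensemble_returns: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """Upper confidence bound over an ensemble of predicted returns.

    ensemble_returns: shape (ensemble_size, num_actions), where each row is
    one ensemble member's infinite-horizon return estimate per candidate action.
    """
    mean = ensemble_returns.mean(axis=0)  # expected long-term return
    std = ensemble_returns.std(axis=0)    # epistemic uncertainty via ensemble disagreement
    return mean + beta * std              # optimism in the face of uncertainty

# Toy example: three ensemble members scoring two candidate actions.
# Action 1 has a lower mean return but higher disagreement, so the
# optimistic bound prefers it for exploration.
returns = np.array([[1.0, 0.5],
                    [1.2, 1.5],
                    [0.8, 0.4]])
best_action = int(np.argmax(ucb_objective(returns, beta=1.0)))
```

A policy trained on this objective is drawn toward interactions whose long-term value is both promising and uncertain, rather than reducing uncertainty indiscriminately.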