Exploration in reinforcement learning is a challenging problem: in the worst case, the agent must search for reward states that could be hidden anywhere in the state space. Can we define a more tractable class of RL problems, where the agent is provided with examples of successful outcomes? In this problem setting, the reward function can be obtained automatically by training a classifier to categorize states as successful or not. If trained properly, such a classifier can not only afford a reward function, but actually provide a well-shaped objective landscape that both promotes progress toward good states and provides a calibrated exploration bonus. In this work, we show that an uncertainty-aware classifier can solve challenging reinforcement learning problems by both encouraging exploration and providing directed guidance towards positive outcomes. We propose a novel mechanism for obtaining these calibrated, uncertainty-aware classifiers based on an amortized technique for computing the normalized maximum likelihood (NML) distribution, and we show how this technique can be made computationally tractable by leveraging tools from meta-learning. We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions, while also providing more effective guidance towards the goal. We demonstrate that our algorithm solves a number of challenging navigation and robotic manipulation tasks which prove difficult or impossible for prior methods.
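To make the core idea concrete, the sketch below illustrates one way an uncertainty-aware success classifier could produce such a reward: for a query state, the classifier is refit twice, once with the state labeled as a success and once as a failure, and the resulting likelihoods are normalized to obtain a conditional NML probability. This is a minimal, illustrative sketch rather than the paper's amortized, meta-learned procedure; the use of scikit-learn's LogisticRegression and the function names are assumptions for exposition.

```python
# Minimal sketch: conditional NML (CNML) probability of "success" for a query
# state, used as a reward. Not the paper's amortized implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression


def cnml_success_probability(X, y, x_query):
    """Return the CNML probability that x_query is a successful outcome.

    X: (n, d) array of previously visited states.
    y: (n,) binary labels (1 = success example, 0 = failure/visited state).
    x_query: (d,) candidate state whose reward we want to evaluate.
    """
    x_query = np.atleast_2d(x_query)
    likelihoods = []
    for label in (0, 1):
        # Refit the classifier with the query point assigned this label.
        X_aug = np.vstack([X, x_query])
        y_aug = np.append(y, label)
        clf = LogisticRegression().fit(X_aug, y_aug)
        # Likelihood the refit model assigns to the label it was trained with.
        likelihoods.append(clf.predict_proba(x_query)[0, label])
    # Normalize over both labelings. States far from the data get a value near
    # 0.5, which acts as a calibrated exploration bonus, while states close to
    # success examples approach 1, giving directed guidance toward the goal.
    return likelihoods[1] / (likelihoods[0] + likelihoods[1])


# Usage (hypothetical data): reward = cnml_success_probability(states, labels, s)
```

Retraining the classifier per query state is what makes naive NML intractable; the amortized, meta-learning-based scheme referred to in the abstract is intended to avoid exactly this per-state refitting cost.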