In meta reinforcement learning (meta RL), an agent learns from a set of training tasks how to quickly solve a new task, drawn from the same task distribution. The optimal meta RL policy, also known as the Bayes-optimal behavior, is well defined, and guarantees optimal reward in expectation with respect to the task distribution. The question we explore in this work is how many training tasks are required to guarantee approximately optimal behavior with high probability. Recent work provided the first such PAC analysis for a model-free setting, where a history-dependent policy was learned from the training tasks. In this work, we propose a different approach: directly learn the task distribution, using density estimation techniques, and then train a policy on the learned task distribution. We show that our approach leads to bounds that depend on the dimension of the task distribution. In particular, in settings where the task distribution lies on a low-dimensional manifold, we extend our analysis to use dimensionality reduction techniques and account for such structure, obtaining significantly better bounds than previous work, whose bounds depend strictly on the number of states and actions. The key to our approach is the regularization implied by the kernel density estimation method. We further demonstrate that this regularization is useful in practice, when `plugged into' the state-of-the-art VariBAD meta RL algorithm.
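The sketch below is not the paper's implementation; it only illustrates, under illustrative assumptions, the two steps named above: estimate the task distribution with kernel density estimation (optionally after dimensionality reduction when tasks lie near a low-dimensional manifold), then sample tasks from the learned distribution for policy training. The task parametrization, PCA dimension, and bandwidth are placeholder choices.

```python
# Minimal sketch: learn the task distribution via KDE, then sample tasks
# from it. All task parameters below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Each training task summarized by a parameter vector (e.g., goal position,
# reward parameters); here 50 training tasks in a 10-dimensional task space.
N, d = 50, 10
task_params = rng.normal(size=(N, d))  # placeholder training tasks

# Optional dimensionality reduction, exploiting low-dimensional task structure.
pca = PCA(n_components=2).fit(task_params)
low_dim = pca.transform(task_params)

# Kernel density estimate of the task distribution; the bandwidth is the
# regularization knob referred to in the abstract.
kde = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(low_dim)

# Draw tasks from the *learned* distribution (mapped back to task space) and
# train the meta RL policy on these instead of the raw training set.
sampled_tasks = pca.inverse_transform(kde.sample(n_samples=1000, random_state=0))
```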