Gradient-based Meta-RL (GMRL) refers to methods that maintain two-level optimisation procedures wherein the outer-loop meta-learner guides the inner-loop gradient-based reinforcement learner to achieve fast adaptation. In this paper, we develop a unified framework that describes variants of GMRL algorithms and points out that existing stochastic meta-gradient estimators adopted by GMRL are actually \textbf{biased}. Such meta-gradient bias comes from two sources: 1) the compositional bias incurred by the two-level problem structure, which has an upper bound of $\mathcal{O}\big(K\alpha^{K}\hat{\sigma}_{\text{In}}|\tau|^{-0.5}\big)$ \emph{w.r.t.} inner-loop update step $K$, learning rate $\alpha$, estimation variance $\hat{\sigma}^{2}_{\text{In}}$ and sample size $|\tau|$, and 2) the multi-step Hessian estimation bias $\hat{\Delta}_{H}$ due to the use of automatic differentiation, which has a polynomial impact $\mathcal{O}\big((K-1)(\hat{\Delta}_{H})^{K-1}\big)$ on the meta-gradient bias. We study tabular MDPs empirically and offer quantitative evidence that corroborates our theoretical findings on existing stochastic meta-gradient estimators. Furthermore, we conduct experiments on Iterated Prisoner's Dilemma and Atari games to show how other methods such as off-policy learning and low-bias estimators can help mitigate the gradient bias of GMRL algorithms in general.
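To make the two bias sources concrete, the sketch below writes out the exact $K$-step meta-gradient under a standard GMRL formulation; the symbols $J^{\text{in}}$, $J^{\text{out}}$ and $\theta_k$ are introduced here purely for illustration and are assumed notation rather than the paper's own.
\begin{align*}
\theta_{k+1} &= \theta_k + \alpha \nabla_{\theta_k} J^{\text{in}}(\theta_k), \qquad k = 0, \dots, K-1, \\
\nabla_{\theta_0} J^{\text{out}}\big(\theta_K(\theta_0)\big) &= \Bigg[\prod_{k=0}^{K-1}\Big(I + \alpha \nabla^{2}_{\theta_k} J^{\text{in}}(\theta_k)\Big)\Bigg] \nabla_{\theta_K} J^{\text{out}}(\theta_K).
\end{align*}
Replacing $\nabla J^{\text{in}}$ and $\nabla^{2} J^{\text{in}}$ with trajectory-based sample estimates then gives rise to 1) the compositional bias, since the outer-loop gradient is evaluated at stochastic inner-loop iterates $\hat{\theta}_K \neq \theta_K$ whose estimation error shrinks with the sample size as $|\tau|^{-0.5}$, and 2) the multi-step Hessian estimation bias, since any per-step Hessian error $\hat{\Delta}_{H}$ compounds through the $K$-term product above.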