We study infinite-horizon average-reward reinforcement learning (RL) for Lipschitz MDPs and develop an algorithm, PZRL, that adaptively discretizes the state-action space and zooms in to those regions of the policy space that appear to yield high average rewards. We show that the regret of PZRL can be bounded as $\tilde{\mathcal{O}}\big(T^{1 - d_{\text{eff.}}^{-1}}\big)$, where $d_{\text{eff.}}= 2d_\mathcal{S} + d^\Phi_z+2$, $d_\mathcal{S}$ is the dimension of the state space, and $d^\Phi_z$ is the zooming dimension. The zooming dimension $d^\Phi_z$ is a problem-dependent quantity that depends not only on the underlying MDP but also on the class of policies $\Phi$ used by the agent; hence, if the agent knows a priori that the optimal policy belongs to a low-complexity class (one with small $d^\Phi_z$), then its regret is correspondingly small. The present work shows how to capture adaptivity gains for infinite-horizon average-reward RL in terms of $d^\Phi_z$. We note that preexisting notions of zooming dimension can handle only the episodic RL case, since the zooming dimension approaches the covering dimension of the state-action space as $T\to\infty$ and hence yields no adaptivity gains. Several experiments are conducted to evaluate the performance of PZRL. PZRL outperforms other state-of-the-art algorithms, which clearly demonstrates the gains arising from adaptivity.
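To make the regret bound concrete, the following minimal sketch (not part of the paper) computes the exponent $1 - d_{\text{eff.}}^{-1}$ in the bound $\tilde{\mathcal{O}}\big(T^{1 - d_{\text{eff.}}^{-1}}\big)$ with $d_{\text{eff.}} = 2d_\mathcal{S} + d^\Phi_z + 2$; the dimension values used are hypothetical and only illustrate how a smaller zooming dimension $d^\Phi_z$ yields a smaller regret exponent.

```python
def regret_exponent(d_state: int, d_zoom: int) -> float:
    """Exponent of T in the PZRL regret bound, with d_eff = 2*d_S + d_z^Phi + 2."""
    d_eff = 2 * d_state + d_zoom + 2
    return 1.0 - 1.0 / d_eff


if __name__ == "__main__":
    d_state = 2  # hypothetical state-space dimension
    # Smaller zooming dimension (low-complexity policy class) => smaller exponent.
    for d_zoom in (0, 1, 2, 4):
        exp = regret_exponent(d_state, d_zoom)
        print(f"d_z^Phi = {d_zoom}: regret ~ T^{exp:.3f}")
```

For instance, with $d_\mathcal{S}=2$ the exponent drops from $T^{0.875}$ at $d^\Phi_z = 4$ to $T^{0.833}$ at $d^\Phi_z = 0$, which is the kind of adaptivity gain the bound captures.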