We consider reinforcement learning in an environment modeled by an episodic, finite, stage-dependent Markov decision process of horizon $H$ with $S$ states and $A$ actions. The performance of an agent is measured by the regret after interacting with the environment for $T$ episodes. We propose an optimistic posterior sampling algorithm for reinforcement learning (OPSRL), a simple variant of posterior sampling that only needs a number of posterior samples logarithmic in $H$, $S$, $A$, and $T$ per state-action pair. For OPSRL we guarantee a high-probability regret bound of order at most $\widetilde{\mathcal{O}}(\sqrt{H^3SAT})$, ignoring $\mathrm{poly}\log(HSAT)$ terms. The key novel technical ingredient is a new sharp anti-concentration inequality for linear forms which may be of independent interest. Specifically, we extend the normal approximation-based lower bound for Beta distributions by Alfers and Dinges [1984] to Dirichlet distributions. Our bound matches the lower bound of order $\Omega(\sqrt{H^3SAT})$, thereby answering the open problems raised by Agrawal and Jia [2017b] for the episodic setting.
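To make the abstract's central mechanism concrete, the sketch below illustrates the generic idea of optimistic posterior sampling with a small number of Dirichlet posterior samples per state-action pair: keep Dirichlet pseudo-counts over transitions, draw a few samples, and back up with the most optimistic one. This is a minimal illustration under assumed details, not the authors' exact OPSRL pseudocode; the class name, the choice of `n_samples`, the prior mass, and the reward estimator are all illustrative assumptions.

```python
# Minimal sketch of optimistic posterior sampling in a stage-dependent MDP of
# horizon H with S states and A actions. Not the paper's exact algorithm:
# names, the prior mass, and the reward estimator are illustrative assumptions.
import numpy as np


class OptimisticPosteriorAgent:
    def __init__(self, S, A, H, n_samples=5, prior_mass=1.0, seed=0):
        self.S, self.A, self.H = S, A, H
        self.n_samples = n_samples            # in the paper, only logarithmically many samples are needed
        self.rng = np.random.default_rng(seed)
        # Dirichlet pseudo-counts over next states for every (stage, state, action).
        self.counts = np.full((H, S, A, S), prior_mass)
        # Simple running reward estimates (illustrative placeholder).
        self.rewards = np.zeros((H, S, A))

    def plan(self):
        """Backward induction on sampled models, keeping the most optimistic backup."""
        Q = np.zeros((self.H + 1, self.S, self.A))
        for h in reversed(range(self.H)):
            V_next = Q[h + 1].max(axis=1)     # greedy value at stage h + 1
            for s in range(self.S):
                for a in range(self.A):
                    # Draw several transition vectors from the Dirichlet posterior
                    # and keep the best resulting backup (the "optimistic" step).
                    samples = self.rng.dirichlet(self.counts[h, s, a],
                                                 size=self.n_samples)
                    Q[h, s, a] = self.rewards[h, s, a] + (samples @ V_next).max()
        return Q

    def update(self, h, s, a, r, s_next):
        """Posterior update after observing one transition (s, a) -> s_next."""
        self.counts[h, s, a, s_next] += 1.0
        # Crude running average of observed rewards (not the paper's estimator).
        self.rewards[h, s, a] += 0.1 * (r - self.rewards[h, s, a])
```

At each episode one would call `plan()`, act greedily with respect to the returned $Q$, and feed the observed transitions back through `update()`; the optimism comes entirely from taking the maximum over a handful of posterior samples rather than adding explicit bonuses.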