We study exploration via randomized value functions in Thompson Sampling (TS)-like algorithms for reinforcement learning. Algorithms of this type enjoy appealing empirical performance. We show that when we use 1) a single random seed in each episode, and 2) a Bernstein-type magnitude of noise, we obtain a worst-case $\widetilde{O}\left(H\sqrt{SAT}\right)$ regret bound for episodic time-inhomogeneous Markov Decision Processes, where $S$ is the size of the state space, $A$ is the size of the action space, $H$ is the planning horizon, and $T$ is the number of interactions. This bound polynomially improves upon all existing bounds for TS-like algorithms based on randomized value functions and, for the first time, matches the $\Omega\left(H\sqrt{SAT}\right)$ lower bound up to logarithmic factors. Our result highlights that randomized exploration can be near-optimal, which was previously achieved only by optimistic algorithms.
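To make the two ingredients concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm or constants) of TS-like exploration with randomized value functions in a tabular episodic MDP: a single random seed is drawn at the start of each episode and drives all value perturbations, and the perturbation scale follows a Bernstein-type magnitude combining an empirical-variance term with a lower-order $H/n$ term. Names such as `bernstein_scale` and all constants are illustrative assumptions, and the empirical model is kept time-homogeneous for brevity.

```python
# Sketch only: randomized value iteration with a single per-episode seed and a
# Bernstein-type noise magnitude. All names and constants are hypothetical.
import numpy as np

def bernstein_scale(var_next_v, n_visits, H, delta=0.05):
    """Bernstein-type magnitude: sqrt(variance / n) plus a lower-order H / n term."""
    log_term = np.log(1.0 / delta)
    return np.sqrt(var_next_v * log_term / n_visits) + H * log_term / n_visits

def randomized_q_values(P_hat, R_hat, N, H, rng):
    """Backward value iteration on the empirical model, adding Gaussian noise to
    each Q estimate; `rng` carries the single random seed drawn for this episode."""
    S, A = R_hat.shape
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        for s in range(S):
            for a in range(A):
                mean_next = P_hat[s, a] @ V[h + 1]
                var_next = P_hat[s, a] @ (V[h + 1] - mean_next) ** 2
                sigma = bernstein_scale(var_next, max(N[s, a], 1), H)
                noise = rng.normal(0.0, sigma)  # all noise comes from the shared seed
                Q[h, s, a] = np.clip(R_hat[s, a] + mean_next + noise, 0.0, H)
            V[h, s] = Q[h, s].max()
    return Q

# Usage: at the start of episode k, draw one seed, compute Q once, act greedily.
S, A, H = 5, 3, 4
N = np.ones((S, A))                                    # visit counts
R_hat = np.random.rand(S, A)                           # empirical rewards
P_hat = np.random.dirichlet(np.ones(S), size=(S, A))   # empirical transitions
rng = np.random.default_rng(seed=0)                    # the single per-episode seed
Q = randomized_q_values(P_hat, R_hat, N, H, rng)
policy = Q.argmax(axis=2)                              # greedy policy for this episode
```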