We study learning algorithms for the classical discounted Markovian bandit problem. We explain how to adapt PSRL [24] and UCRL2 [2] to exploit the problem structure; these variants are called MB-PSRL and MB-UCRL2. While the regret bound and runtime of vanilla implementations of PSRL and UCRL2 are exponential in the number of bandits, we show that the episodic regret of MB-PSRL and MB-UCRL2 is $\tilde O(S\sqrt{nK})$, where $K$ is the number of episodes, $n$ is the number of bandits, and $S$ is the number of states of each bandit (the exact bound in $S$, $n$ and $K$ is given in the paper). Up to a factor $\sqrt S$, this matches the lower bound of $\Omega(\sqrt{SnK})$ that we also derive in the paper. MB-PSRL is also computationally efficient: its runtime is linear in the number of bandits. We further show that this linear runtime cannot be achieved by adapting classical non-Bayesian algorithms such as UCRL2 or UCBVI to Markovian bandit problems. Finally, we perform numerical experiments that confirm that MB-PSRL outperforms other existing algorithms in practice, both in terms of regret and of computation time.
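To make the setting concrete, the following is a minimal sketch of a posterior-sampling episodic loop for a rested discounted Markovian bandit, in the spirit of MB-PSRL. This is not the paper's algorithm: the policy step uses a simplified "commit to one arm for the episode" value $V = (I - \beta P)^{-1} r$ in place of the Gittins-index policy, and all parameters (`n`, `S`, `H`, `K`, the Dirichlet prior) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, S, beta = 3, 4, 0.9          # number of arms, states per arm, discount
H, K = 20, 50                   # episode length, number of episodes

# Unknown true dynamics: one S x S transition matrix per arm; rewards known.
true_P = rng.dirichlet(np.ones(S), size=(n, S))
rewards = rng.uniform(0, 1, size=(n, S))       # r(arm, state) in [0, 1]

# Dirichlet posterior counts over transitions, uniform prior.
counts = np.ones((n, S, S))
states = np.zeros(n, dtype=int)                # current state of each arm

def commit_value(P_arm, r_arm):
    """Discounted value of playing this arm forever: V = (I - beta P)^-1 r."""
    return np.linalg.solve(np.eye(S) - beta * P_arm, r_arm)

total_reward = 0.0
for k in range(K):
    # 1) Sample one model from the posterior (one transition matrix per arm).
    P_hat = np.array([[rng.dirichlet(counts[a, s]) for s in range(S)]
                      for a in range(n)])
    # 2) Simplified policy: commit, for this episode, to the arm whose
    #    sampled "play forever" value from its current state is largest.
    values = [commit_value(P_hat[a], rewards[a])[states[a]] for a in range(n)]
    a = int(np.argmax(values))
    # 3) Run the episode; only the played arm's state evolves
    #    (the rested-bandit convention).
    for _ in range(H):
        s = states[a]
        total_reward += rewards[a, s]
        s_next = rng.choice(S, p=true_P[a, s])
        counts[a, s, s_next] += 1              # posterior update
        states[a] = s_next
```

The key point the sketch illustrates is why the per-bandit decomposition matters: the posterior and the sampled model are maintained per arm ($n \cdot S^2$ parameters) rather than over the product state space of size $S^n$, which is what keeps the runtime linear in the number of bandits.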