The PhD thesis of Maillard (2013) presents a randomized algorithm for the $K$-armed bandit problem. This lesser-known algorithm, which we call Maillard sampling (MS), computes the probability of choosing each arm in closed form, which is useful for counterfactual evaluation from bandit-logged data but is lacking in Thompson sampling, a widely adopted bandit algorithm in industry. Motivated by this merit, we revisit MS and perform an improved analysis showing that it achieves both asymptotic optimality and a $\sqrt{KT\log{T}}$ minimax regret bound, where $T$ is the time horizon, matching the performance of the standard asymptotically optimal UCB. We then propose a variant of MS called MS$^+$ that improves the minimax bound to $\sqrt{KT\log{K}}$ without losing asymptotic optimality. MS$^+$ can also be tuned to be aggressive (i.e., explore less) without losing its theoretical guarantees, a unique feature unavailable in existing bandit algorithms. Our numerical evaluation demonstrates the effectiveness of MS$^+$.
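The abstract's key point is that MS assigns each arm a closed-form selection probability. A minimal sketch of what such a rule can look like, assuming 1-sub-Gaussian rewards and a probability proportional to $\exp(-N_a \hat{\Delta}_a^2/2)$, where $N_a$ is the pull count and $\hat{\Delta}_a$ the empirical gap of arm $a$ (the exact constants are assumptions here, not taken from this abstract):

```python
import numpy as np

def maillard_sampling_probs(counts, means, var=1.0):
    """Closed-form arm-selection probabilities (sketch).

    Assumes rewards are sub-Gaussian with variance proxy `var` and
    uses p(a) proportional to exp(-N_a * gap_a^2 / (2 * var)); this
    specific form is an illustrative assumption.
    """
    counts = np.asarray(counts, dtype=float)
    means = np.asarray(means, dtype=float)
    gaps = means.max() - means              # empirical gap of each arm
    logits = -counts * gaps ** 2 / (2.0 * var)
    w = np.exp(logits - logits.max())       # shift for numerical stability
    return w / w.sum()                      # normalize to a distribution

# Example: three arms, equal pull counts, decreasing empirical means.
probs = maillard_sampling_probs([10, 10, 10], [0.5, 0.4, 0.1])
```

Because the probabilities are available explicitly (unlike Thompson sampling, where they are defined implicitly through posterior sampling), they can be logged alongside each action and reused later for counterfactual evaluation, e.g., inverse-propensity weighting.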