In the regret-based formulation of multi-armed bandit (MAB) problems, much of the literature, with rare exceptions, focuses on arms with i.i.d. rewards. In this paper, we consider the problem of obtaining regret guarantees for MAB problems in which the rewards of each arm form a Markov chain that may not belong to a single-parameter exponential family. Achieving logarithmic regret in such problems is not difficult: a variation of standard KL-UCB does the job. However, the constants obtained from such an analysis are poor, for the following reason: i.i.d. rewards are a special case of Markov rewards, and it is difficult to design an algorithm that works well regardless of whether the underlying model is truly Markovian or i.i.d. To overcome this issue, we introduce a novel algorithm that identifies whether the rewards from each arm are truly Markovian or i.i.d. using a Hellinger distance-based test. Our algorithm then switches from standard KL-UCB to a specialized version of KL-UCB when it determines that the arm's rewards are Markovian, thus achieving low regret in both the i.i.d. and Markovian settings.
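To illustrate the idea behind a Hellinger distance-based test for Markov structure, the sketch below compares the empirical distributions of the next reward conditioned on each possible previous reward: for i.i.d. rewards these conditionals coincide, while for a genuinely Markovian arm they differ. This is a minimal illustrative heuristic, not the paper's calibrated test statistic; the function names, the fixed threshold, and the finite reward support are all assumptions made here for the example.

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete distributions on the same support.
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def looks_markovian(rewards, support, threshold=0.1):
    """Heuristic check for Markov dependence in a reward sequence.

    Builds, for each value s in the (finite) support, the empirical
    distribution of the next reward given that the previous reward was s.
    For an i.i.d. arm these conditional distributions agree up to sampling
    noise; a large pairwise Hellinger distance between them suggests the
    rewards form a nontrivial Markov chain. The threshold here is an
    illustrative placeholder, not a calibrated critical value.
    """
    rewards = np.asarray(rewards)
    conditionals = []
    for s in support:
        nxt = rewards[1:][rewards[:-1] == s]
        if len(nxt) == 0:
            continue  # state s never observed as a predecessor
        conditionals.append(np.array([np.mean(nxt == v) for v in support]))
    # Maximum pairwise Hellinger distance among the conditionals.
    d = max(
        (hellinger(p, q)
         for i, p in enumerate(conditionals)
         for q in conditionals[i + 1:]),
        default=0.0,
    )
    return d > threshold
```

For example, an i.i.d. Bernoulli(1/2) sequence yields nearly identical conditionals (distance near 0), while a "sticky" two-state chain that repeats its previous reward with high probability yields well-separated conditionals, so the test flags it as Markovian.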