We consider a multi-armed bandit framework in which the rewards obtained by pulling different arms are correlated. We develop a unified approach to leverage these reward correlations and present fundamental generalizations of classic bandit algorithms to the correlated setting. We also present a unified proof technique for analyzing the proposed algorithms. Rigorous analysis of C-UCB (the correlated-bandit version of the Upper Confidence Bound algorithm) reveals that the algorithm pulls certain sub-optimal arms, termed non-competitive, only O(1) times, as opposed to the O(log T) pulls required by classic bandit algorithms such as UCB and Thompson Sampling. We present a regret lower bound and show that when arms are correlated through a latent random source, our algorithms achieve order-optimal regret. We validate the proposed algorithms via experiments on the MovieLens and Goodreads datasets and show significant improvement over classical bandit algorithms.
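As a rough illustration of the idea behind C-UCB, the sketch below runs a standard UCB rule restricted to a "competitive" set of arms identified from correlation information. It assumes known pseudo-reward upper bounds on the conditional expected reward of arm l given that arm k yields reward r; the function names pull and pseudo_reward, and the exact competitive-set rule used here, are illustrative assumptions rather than the paper's precise specification.

```python
import numpy as np

def c_ucb(pull, pseudo_reward, K, T):
    """Minimal sketch of a correlated-UCB loop (illustrative, not the paper's exact algorithm).

    pull(k)                -- returns a stochastic reward for pulling arm k
    pseudo_reward(l, k, r) -- assumed upper bound on E[reward of arm l | arm k gave reward r]
    """
    counts = np.zeros(K)        # number of times each arm was pulled
    means = np.zeros(K)         # empirical mean reward of each arm
    phi = np.zeros((K, K))      # empirical mean pseudo-reward of arm l with respect to arm k

    for t in range(T):
        if t < K:
            k = t               # pull each arm once to initialize
        else:
            # Competitive set: arms whose empirical pseudo-reward with respect to
            # the most-pulled arm is not below that arm's empirical mean reward.
            k_max = int(np.argmax(counts))
            competitive = [l for l in range(K)
                           if l == k_max or phi[l, k_max] >= means[k_max]]
            # Standard UCB index, maximized over the competitive set only.
            ucb = means + np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
            k = max(competitive, key=lambda l: ucb[l])

        r = pull(k)
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]
        # Update empirical pseudo-rewards of every arm l using the reward observed from arm k.
        for l in range(K):
            phi[l, k] += (pseudo_reward(l, k, r) - phi[l, k]) / counts[k]

    return means, counts
```

Arms that are non-competitive under this rule are excluded from the UCB step in most rounds, which is the mechanism behind the O(1) pulls of non-competitive arms described above.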