We consider a finite-armed structured bandit problem in which the mean rewards of different arms are known functions of a common hidden parameter $\theta^*$. Since we place no restrictions on these functions, this setting subsumes several previously studied frameworks that assume linear or invertible reward functions. We propose a novel approach that gradually estimates the hidden $\theta^*$ and uses this estimate, together with the mean reward functions, to substantially reduce the exploration of sub-optimal arms. This approach enables us to fundamentally generalize any classic bandit algorithm, including UCB and Thompson Sampling, to the structured bandit setting. We prove via regret analysis that our proposed UCB-C algorithm (a structured bandit version of UCB) pulls only a subset of the sub-optimal arms $O(\log T)$ times, while the remaining sub-optimal arms (referred to as non-competitive arms) are pulled only $O(1)$ times. As a result, in cases where all sub-optimal arms are non-competitive, which can happen in many practical scenarios, the proposed algorithms achieve bounded regret. We also conduct simulations on the MovieLens recommendation dataset to demonstrate the improvement of the proposed algorithms over existing structured bandit algorithms.
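To make the mechanism concrete, below is a minimal sketch (in Python with NumPy) of the idea behind UCB-C as described above: maintain the set of $\theta$ values consistent with the observed arm means, mark an arm competitive if it is optimal for some plausible $\theta$, and run standard UCB restricted to the competitive arms. The grid-based confidence set and the names `mu_fns`, `pull`, and `theta_grid` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def ucb_c(mu_fns, pull, T, theta_grid, c=2.0):
    """Sketch of UCB-C: UCB restricted to arms that are optimal for some
    theta still consistent with the data. Illustrative only, not the
    exact construction from the paper."""
    K = len(mu_fns)
    counts = np.zeros(K)  # number of pulls per arm
    sums = np.zeros(K)    # cumulative reward per arm
    # Precompute the mean rewards mu_k(theta) on a grid of candidate thetas.
    mu_grid = np.array([[f(th) for f in mu_fns] for th in theta_grid])

    for t in range(1, T + 1):
        if t <= K:
            k = t - 1  # pull each arm once to initialize
        else:
            means = sums / counts
            width = np.sqrt(c * np.log(t) / counts)
            # A theta value stays plausible if its predicted means match
            # every empirical mean within the confidence width.
            plausible = np.all(np.abs(mu_grid - means) <= width, axis=1)
            if plausible.any():
                # Competitive arms: optimal for at least one plausible theta.
                competitive = np.unique(mu_grid[plausible].argmax(axis=1))
            else:
                competitive = np.arange(K)  # fall back to all arms
            ucb = means + width  # standard UCB index
            k = competitive[np.argmax(ucb[competitive])]
        reward = pull(k)  # observe a reward from the environment
        counts[k] += 1
        sums[k] += reward
    return sums.sum()
```

As a usage illustration, one could call `ucb_c` with, say, `mu_fns = [np.sin, np.cos]`, a `pull` function returning noisy samples of the true arm means, and `theta_grid = np.linspace(0, 2 * np.pi, 500)`; when every plausible $\theta$ yields the same optimal arm, the competitive set shrinks to a single arm and exploration of the others stops, matching the bounded-regret behavior described in the abstract.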