We consider a finite-horizon multi-armed bandit (MAB) problem in a Bayesian setting, for which we propose an information relaxation sampling framework. With this framework, we define an intuitive family of control policies that includes Thompson sampling (TS) and the Bayesian optimal policy as endpoints. Analogous to TS, which, at each decision epoch, pulls an arm that is best with respect to the randomly sampled parameters, our algorithms sample entire future reward realizations and take the corresponding best action. However, this is done in the presence of "penalties" that seek to compensate for the availability of future information. We develop several novel policies and performance bounds for MAB problems that, between the two endpoints, trade improved performance against increased computational complexity. Our policies can be viewed as natural generalizations of TS that simultaneously incorporate knowledge of the time horizon and explicitly account for the exploration-exploitation trade-off. We prove associated structural results on performance bounds and suboptimality gaps. Numerical experiments suggest that this new class of policies performs well, in particular in settings where the finite time horizon introduces significant exploration-exploitation tension into the problem. Finally, inspired by the finite-horizon Gittins index, we propose an index policy built on our framework that, in our numerical experiments, notably outperforms state-of-the-art algorithms.
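To make the sampling idea concrete, below is a minimal illustrative sketch (not the paper's actual algorithm) for a Bernoulli bandit with Beta priors: it contrasts standard Thompson sampling with a policy that samples entire future reward realizations over the remaining horizon and pulls the arm with the largest penalized sampled total. The names `sampled_future_action` and `penalty` are hypothetical, and the penalty is left as an unspecified placeholder, since the specific penalty functions are the substance of the framework itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_sampling_action(alpha, beta):
    """Standard TS: sample each arm's mean from its Beta posterior
    and pull the arm whose sampled mean is largest."""
    theta = rng.beta(alpha, beta)
    return int(np.argmax(theta))

def sampled_future_action(alpha, beta, horizon_remaining, penalty=None):
    """Illustrative variant: sample full future reward realizations for
    each arm over the remaining horizon, optionally subtract a penalty
    (placeholder here), and pull the arm with the largest sampled total."""
    K = len(alpha)
    theta = rng.beta(alpha, beta)                         # sampled arm means
    rewards = rng.random((horizon_remaining, K)) < theta  # sampled future outcomes
    totals = rewards.sum(axis=0).astype(float)
    if penalty is not None:
        totals -= penalty(rewards, alpha, beta)           # hypothetical penalty term
    return int(np.argmax(totals))

# Usage sketch: run the sampled-future variant with no penalty on a
# 3-armed Bernoulli bandit with horizon T = 50 (all values illustrative).
true_means = np.array([0.3, 0.5, 0.7])
T, K = 50, len(true_means)
alpha, beta = np.ones(K), np.ones(K)   # Beta(1, 1) priors
for t in range(T):
    a = sampled_future_action(alpha, beta, horizon_remaining=T - t)
    r = int(rng.random() < true_means[a])
    alpha[a] += r
    beta[a] += 1 - r
```

With `penalty=None` the policy simply acts on the clairvoyant best arm under the sampled future; the framework's penalties are what compensate for this access to future information and interpolate toward TS and the Bayesian optimal policy.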