This paper studies a new variant of the stochastic multi-armed bandit problem in which the learner has access to auxiliary information about the arms. This auxiliary information is correlated with the arm rewards and is treated as a control variate. In many applications, the arm rewards are functions of exogenous variables whose mean values are known a priori from historical data and can therefore serve as control variates. We use the control variates to obtain mean estimates with smaller variance and tighter confidence bounds, and develop an algorithm, named UCB-CV, that exploits these improved estimates. We characterize its regret bounds in terms of the correlation between the rewards and the control variates. Experiments on synthetic data validate the performance guarantees of the proposed algorithm.
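For context, the following is a minimal sketch of the classical control-variate estimator underlying this idea; the symbols $\beta$, $\omega$, and $\rho$ are generic notation rather than the paper's, and the paper's exact estimator and confidence bounds may differ.
\[
\hat{\mu}_{\mathrm{cv}} = \frac{1}{n}\sum_{t=1}^{n}\bigl(X_t - \beta\,(W_t - \omega)\bigr),
\qquad
\beta^{*} = \frac{\mathrm{Cov}(X, W)}{\mathrm{Var}(W)},
\qquad
\mathrm{Var}\!\left(\hat{\mu}_{\mathrm{cv}}\right) = \frac{\sigma_X^{2}\,(1 - \rho^{2})}{n},
\]
where $X_t$ are the observed rewards, $W_t$ are the paired control-variate observations with known mean $\omega = \mathbb{E}[W]$, and $\rho$ is the correlation between $X$ and $W$. With the optimal coefficient $\beta^{*}$, the variance shrinks by a factor of $(1 - \rho^{2})$ relative to the plain sample mean, which is consistent with the abstract's claim that confidence bounds tighten and regret improves as the correlation grows.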