The stochastic contextual bandit problem, which models the trade-off between exploration and exploitation, has many real-world applications, including recommender systems, online advertising, and clinical trials. Like many other machine learning algorithms, contextual bandit algorithms often have one or more hyper-parameters. For example, most optimal stochastic contextual bandit algorithms have an unknown exploration parameter that controls the trade-off between exploration and exploitation. A proper choice of hyper-parameters is essential for contextual bandit algorithms to perform well. However, it is infeasible to use offline tuning methods to select hyper-parameters in the contextual bandit setting, since there is no pre-collected dataset and decisions have to be made in real time. To tackle this problem, we first propose a two-layer bandit structure for auto-tuning the exploration parameter and further generalize it to the Syndicated Bandits framework, which can learn multiple hyper-parameters dynamically in the contextual bandit setting. We derive regret bounds for the proposed Syndicated Bandits framework and show that it avoids a regret that depends exponentially on the number of hyper-parameters to be tuned; moreover, it achieves optimal regret bounds in certain scenarios. The Syndicated Bandits framework is general enough to handle the tuning tasks in many popular contextual bandit algorithms, such as LinUCB, LinTS, and UCB-GLM. Experiments on both synthetic and real datasets validate the effectiveness of our proposed framework.
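The two-layer structure described above can be illustrated with a minimal sketch: an outer adversarial bandit (here EXP3) selects an exploration parameter from a candidate grid at each round, and the inner contextual bandit (here LinUCB) uses the selected value to pick an arm. The candidate grid, the synthetic linear environment, and all constants below are illustrative assumptions, not the paper's exact algorithm or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear bandit environment (assumed for illustration):
# K arms per round, d-dimensional contexts, unknown parameter theta.
d, K, T = 5, 10, 2000
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)

# Hypothetical candidate grid of exploration parameters for the inner LinUCB.
alphas = np.array([0.01, 0.1, 0.5, 1.0, 2.0])
J = len(alphas)

# Outer layer: EXP3 over the candidate set (the "hyper-parameter bandit").
eta = np.sqrt(2 * np.log(J) / (J * T))
weights = np.zeros(J)

# Inner layer: LinUCB statistics, shared across rounds.
A = np.eye(d)      # regularized Gram matrix
b = np.zeros(d)    # response-weighted sum of chosen contexts

total_reward = 0.0
for t in range(T):
    X = rng.normal(size=(K, d)) / np.sqrt(d)  # contexts for this round

    # Outer layer samples an exploration parameter via EXP3 probabilities.
    p = np.exp(weights - weights.max())
    p /= p.sum()
    j = rng.choice(J, p=p)
    alpha = alphas[j]

    # Inner layer runs LinUCB arm selection with the sampled alpha.
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    bonus = np.sqrt(np.einsum('ki,ij,kj->k', X, A_inv, X))
    arm = int(np.argmax(X @ theta_hat + alpha * bonus))

    # Observe a noisy reward and update both layers.  For simplicity the raw
    # reward is fed to EXP3; a bounded reward would be used in practice.
    r = float(X[arm] @ theta + 0.1 * rng.normal())
    A += np.outer(X[arm], X[arm])
    b += r * X[arm]
    weights[j] += eta * r / p[j]  # importance-weighted EXP3 update
    total_reward += r

print(f"average reward: {total_reward / T:.3f}")
```

The outer EXP3 layer treats each candidate exploration parameter as an arm and adapts its sampling distribution to the rewards actually observed, so no pre-collected data or offline tuning is needed; the Syndicated Bandits framework generalizes this idea to several hyper-parameters at once.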