We consider a contextual bandit problem with a combinatorial action set and time-varying base arm availability. At the beginning of each round, the agent observes the set of available base arms and their contexts, and then selects a feasible subset of the available base arms as its action, with the goal of maximizing its cumulative reward in the long run. We assume that the mean outcomes of the base arms are sampled from a Gaussian process indexed by the context set ${\cal X}$, and that the expected reward is Lipschitz continuous in the expected base arm outcomes. For this setup, we propose an algorithm called Optimistic Combinatorial Learning and Optimization with Kernel Upper Confidence Bounds (O'CLOK-UCB) and prove that it incurs $\tilde{O}(K\sqrt{T\overline{\gamma}_{T}})$ regret with high probability, where $\overline{\gamma}_{T}$ is the maximum information gain associated with the set of base arm contexts that appeared in the first $T$ rounds and $K$ is the maximum cardinality of any feasible action over all rounds. To speed up the algorithm dramatically, we also propose a variant of O'CLOK-UCB that uses sparse GPs. Finally, we experimentally show that both algorithms exploit inter-base-arm outcome correlation and vastly outperform the previous state-of-the-art UCB-based algorithms in realistic setups.
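To make the round structure concrete, the following is a minimal sketch of one O'CLOK-UCB-style round: fit a GP posterior on past base arm observations, compute kernel upper confidence bounds for the currently available base arm contexts, and feed them to a combinatorial oracle. The RBF kernel, the confidence multiplier `beta_t`, the synthetic data, and the simple top-$K$ cardinality oracle are illustrative assumptions, not the paper's exact specification.

```python
# Illustrative sketch (assumptions noted above), not the paper's reference code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# History of previously played base arms: contexts and observed noisy outcomes.
X_hist = rng.uniform(0, 1, size=(30, 2))
y_hist = np.sin(3 * X_hist[:, 0]) + 0.1 * rng.standard_normal(30)

# GP posterior over base arm outcomes, fit on the history.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=0.01)
gp.fit(X_hist, y_hist)

# Contexts of the base arms available in the current round (time-varying set).
X_avail = rng.uniform(0, 1, size=(10, 2))
mu, sigma = gp.predict(X_avail, return_std=True)

# Kernel upper confidence bounds; in the analysis beta_t grows with the
# information gain, here it is a fixed constant for illustration.
beta_t = 2.0
ucb = mu + np.sqrt(beta_t) * sigma

# Exact oracle for a simple cardinality constraint (|action| <= K):
# pick the K available base arms with the largest UCB indices.
K = 3
action = np.argsort(ucb)[-K:]
print("selected base arms:", action, "UCB indices:", np.round(ucb[action], 3))
```

After playing the selected subset, the agent would append the observed base arm outcomes to the history and refit (or update) the GP posterior for the next round; the sparse-GP variant replaces the exact posterior update with an approximate one to reduce the cubic cost in the number of observations.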