We consider the general (stochastic) contextual bandit problem under the realizability assumption, i.e., the expected reward, as a function of contexts and actions, belongs to a general function class $\mathcal{F}$. We design a fast and simple algorithm that achieves the statistically optimal regret with only ${O}(\log T)$ calls to an offline regression oracle across all $T$ rounds. The number of oracle calls can be further reduced to $O(\log\log T)$ if $T$ is known in advance. Our results provide the first universal and optimal reduction from contextual bandits to offline regression, solving an important open problem in the contextual bandit literature. A direct consequence of our results is that any advances in offline regression immediately translate to contextual bandits, statistically and computationally. This leads to faster algorithms and improved regret guarantees for broader classes of contextual bandit problems.
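As a rough illustration of where the $O(\log T)$ oracle-call count comes from, the sketch below pairs a doubling epoch schedule (one offline-regression fit per epoch) with inverse-gap-weighted action sampling, a standard way in the literature to convert reward estimates into an exploration distribution. The function names and the particular sampling rule are illustrative assumptions, not the paper's exact algorithm; a more aggressive epoch schedule would similarly yield the $O(\log\log T)$ variant when $T$ is known.

```python
def epoch_schedule(T):
    """Doubling epoch boundaries tau_m = 2^m. One regression-oracle
    call is made at the start of each epoch, so the total number of
    calls over T rounds is O(log T). (Illustrative; the paper's exact
    schedule may differ.)"""
    taus, m = [], 1
    while 2 ** m < T:
        taus.append(2 ** m)
        m += 1
    taus.append(T)  # final epoch ends at round T
    return taus

def inverse_gap_weighting(predictions, gamma):
    """Turn predicted rewards for K actions into a sampling
    distribution: actions with a larger gap to the empirical best
    are played with probability inversely proportional to that gap,
    scaled by the learning-rate parameter gamma. This is one
    well-known scheme; it is used here only to illustrate how a
    regression estimate can drive exploration."""
    K = len(predictions)
    best = max(range(K), key=lambda a: predictions[a])
    probs = [0.0] * K
    for a in range(K):
        if a != best:
            probs[a] = 1.0 / (K + gamma * (predictions[best] - predictions[a]))
    probs[best] = 1.0 - sum(probs)  # remaining mass on the greedy action
    return probs
```

With `T = 1024`, `epoch_schedule` produces only 10 epochs, matching the logarithmic oracle-call budget; `inverse_gap_weighting` always returns a valid distribution that favors the action with the highest predicted reward.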