We study the contextual continuum bandits problem, in which the learner sequentially receives a side-information vector and must choose an action in a convex set so as to minimize the function associated with the observed context. The goal is to perform well against the minimizer of each underlying function for the received contexts, leading to the contextual notion of regret, which is stronger than the standard static regret. Assuming that the objective functions are $\gamma$-H\"older with respect to the contexts, $0<\gamma\le 1,$ we demonstrate that any algorithm achieving sub-linear static regret can be extended to achieve sub-linear contextual regret. We prove a static-to-contextual regret conversion theorem that upper-bounds the contextual regret of the resulting algorithm in terms of the static regret of the input algorithm. We further study the implications of this general result for three fundamental ways in which the objective function may depend on the action variable: (a) Lipschitz bandits, (b) convex bandits, and (c) strongly convex and smooth bandits. For Lipschitz bandits with $\gamma=1,$ combining our results with the lower bound of Slivkins (2014) shows that our algorithm achieves the minimax optimal contextual regret in the noise-free adversarial setting. We then prove that, in the presence of noise, the contextual regret rate as a function of the number of queries is the same for convex bandits as for strongly convex and smooth bandits. Lastly, we present a minimax lower bound with two key implications. First, obtaining sub-linear contextual regret may be impossible for functions that are not continuous with respect to the context. Second, for convex bandits and for strongly convex and smooth bandits, the proposed algorithms achieve, up to a logarithmic factor, the minimax optimal contextual regret rate as a function of the number of queries.
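A static-to-contextual conversion of the flavor described above can be sketched as a context-binning meta-algorithm: partition the context space into cells and run an independent static bandit learner in each cell, so that the H\"older continuity in the context bounds the approximation error per cell. The sketch below is a minimal illustration under assumed simplifications (one-dimensional contexts in $[0,1]$, a finite action grid, and an $\varepsilon$-greedy placeholder standing in for any sub-linear static-regret algorithm); it is not the paper's actual construction.

```python
import random


class StaticLearner:
    """Placeholder static bandit learner over a finite action grid.

    Any algorithm with sub-linear static regret could be plugged in here;
    for illustration we use epsilon-greedy on empirical mean losses.
    """

    def __init__(self, actions, eps=0.1):
        self.actions = actions
        self.eps = eps
        self.sums = [0.0] * len(actions)   # cumulative loss per action
        self.counts = [0] * len(actions)   # number of pulls per action
        self.last = None                   # index of the last played action

    def select(self):
        if random.random() < self.eps or all(c == 0 for c in self.counts):
            self.last = random.randrange(len(self.actions))  # explore
        else:
            # Exploit: play the action with the lowest empirical mean loss.
            means = [s / c if c else float("inf")
                     for s, c in zip(self.sums, self.counts)]
            self.last = min(range(len(self.actions)), key=means.__getitem__)
        return self.actions[self.last]

    def update(self, loss):
        self.sums[self.last] += loss
        self.counts[self.last] += 1


class ContextualWrapper:
    """Static-to-contextual conversion by uniform context binning.

    Partitions the context space [0, 1] into m equal bins and maintains
    one independent static learner per bin. For gamma-Holder losses, the
    bin width controls the per-bin approximation error, so tuning m
    trades approximation error against per-bin static regret.
    """

    def __init__(self, m, actions):
        self.m = m
        self.learners = [StaticLearner(actions) for _ in range(m)]

    def _bin(self, context):
        # Map a context in [0, 1] to a bin index in {0, ..., m-1}.
        return min(int(context * self.m), self.m - 1)

    def select(self, context):
        return self.learners[self._bin(context)].select()

    def update(self, context, loss):
        self.learners[self._bin(context)].update(loss)


# Illustrative usage: losses f(x, c) = (x - c)^2, contexts drawn uniformly.
if __name__ == "__main__":
    random.seed(0)
    wrapper = ContextualWrapper(m=8, actions=[i / 10 for i in range(11)])
    for _ in range(1000):
        c = random.random()
        x = wrapper.select(c)
        wrapper.update(c, (x - c) ** 2)
```

The choice of the number of bins `m` mirrors the trade-off in the conversion theorem: finer bins reduce the H\"older approximation error but leave each static learner with fewer queries, so `m` is tuned as a function of the horizon and of $\gamma$.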