Lipschitz bandits is a prominent version of multi-armed bandits that studies large, structured action spaces such as the [0,1] interval, where similar actions are guaranteed to have similar rewards. A central theme here is the adaptive discretization of the action space, which gradually ``zooms in'' on the more promising regions thereof. The goal is to take advantage of ``nicer'' problem instances, while retaining near-optimal worst-case performance. While the stochastic version of the problem is well-understood, the general version with adversarial rewards is not. We provide the first algorithm for adaptive discretization in the adversarial version, and derive instance-dependent regret bounds. In particular, we recover the worst-case optimal regret bound for the adversarial version, and the instance-dependent regret bound for the stochastic version. Further, an application of our algorithm to dynamic pricing (where a seller repeatedly adjusts prices for a product) enjoys these regret bounds without any smoothness assumptions.
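As context, and as standard results from the Lipschitz bandits literature rather than statements taken from this abstract: over a metric space of covering dimension $d$ (so $d=1$ for the $[0,1]$ interval), the worst-case regret rate is
\[
  R(T) \;=\; \tilde{\Theta}\!\left(T^{\frac{d+1}{d+2}}\right)
  \qquad\text{(worst case)},
\]
while adaptive discretization in the stochastic version yields the instance-dependent bound
\[
  R(T) \;=\; \tilde{O}\!\left(T^{\frac{d_z+1}{d_z+2}}\right)
  \qquad\text{(zooming dimension } d_z \le d\text{)},
\]
where $d_z$ denotes the zooming dimension of the instance (notation assumed here, following prior work). The abstract's claim is that both rates are recovered under adversarial rewards.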