We revisit the problem of online learning with sleeping experts/bandits: in each time step, only a subset of the actions is available for the algorithm to choose from (and learn about). The work of Kleinberg et al. (2010) showed that there exist no-regret algorithms which asymptotically perform no worse than the best ranking of actions. Unfortunately, achieving this regret bound appears computationally hard: Kanade and Steinke (2014) showed that achieving this no-regret performance is at least as hard as PAC-learning DNFs, a notoriously difficult problem. In the present work, we relax the original problem and study computationally efficient no-approximate-regret algorithms: such algorithms may exceed the optimal cost by a multiplicative constant factor, in addition to the usual additive regret. We give an algorithm that provides a no-approximate-regret guarantee for the general sleeping experts/bandits problem. For several canonical special cases of the problem, we give algorithms with significantly better approximation ratios; these algorithms also illustrate different techniques for achieving no-approximate-regret guarantees.
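The following is a minimal formalization of the no-approximate-regret guarantee described above, assuming losses accumulated over $T$ rounds and comparing against the best fixed ranking of actions; the approximation factor $\alpha$ and the sublinear additive term are illustrative placeholders, not the paper's exact bounds.

\[
  \mathbb{E}\!\left[\sum_{t=1}^{T} \ell_t(a_t)\right]
  \;\le\;
  \alpha \cdot \min_{\sigma \in \text{Rankings}} \sum_{t=1}^{T} \ell_t\big(\sigma(A_t)\big)
  \;+\; o(T),
\]
where $A_t$ is the set of awake (available) actions at time $t$, $\sigma(A_t)$ is the highest-ranked awake action under ranking $\sigma$, $a_t$ is the algorithm's choice, and $\alpha \ge 1$ is the approximation ratio; setting $\alpha = 1$ recovers the standard no-regret benchmark of Kleinberg et al. (2010).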