We consider the problem of adversarial (non-stochastic) online learning with partial-information feedback, where at each round a decision maker selects an action from a finite set of alternatives. We develop a black-box approach for such problems, where the learner observes as feedback only the losses of a subset of the actions that includes the selected action. When losses of actions are non-negative, under the graph-based feedback model introduced by Mannor and Shamir, we offer algorithms that attain the so-called "small-loss" $o(\alpha L^{\star})$ regret bounds with high probability, where $\alpha$ is the independence number of the graph and $L^{\star}$ is the loss of the best action. Prior to our work, there was no data-dependent guarantee for general feedback graphs even for pseudo-regret (without dependence on the number of actions, i.e., utilizing the increased information feedback). Taking advantage of the black-box nature of our technique, we extend our results to many other applications, such as semi-bandits (including routing in networks), contextual bandits (even with an infinite comparator class), and learning with slowly changing (shifting) comparators. In the special case of classical bandit and semi-bandit problems, we provide optimal small-loss, high-probability guarantees of $\tilde{O}(\sqrt{dL^{\star}})$ for actual regret, where $d$ is the number of actions, answering open questions of Neu. Previous bounds for bandits and semi-bandits were known only for pseudo-regret and only in expectation. We also offer an optimal $\tilde{O}(\sqrt{\kappa L^{\star}})$ regret guarantee for fixed feedback graphs with clique-partition number at most $\kappa$.
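For concreteness, the quantities referenced above follow the standard definitions (our restatement, not taken verbatim from the paper): with $d$ actions, $T$ rounds, and loss $\ell_t(a)$ for action $a$ at round $t$,

```latex
% Loss of the best fixed action in hindsight over T rounds:
L^{\star} \;=\; \min_{a \in [d]} \sum_{t=1}^{T} \ell_t(a),
\qquad
% Actual regret of the played sequence a_1, \dots, a_T:
R_T \;=\; \sum_{t=1}^{T} \ell_t(a_t) \;-\; L^{\star}.
```

A small-loss (first-order) bound such as $\tilde{O}(\sqrt{dL^{\star}})$ replaces the worst-case $\sqrt{T}$ dependence with $\sqrt{L^{\star}}$, which is much smaller whenever the best action incurs little cumulative loss.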
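The feedback-graph model and the independence number $\alpha$ can be illustrated with a toy sketch (our own code, not from the paper; the names `observe` and `independence_number` are ours): playing an action reveals the losses of that action and of its neighbors in the graph.

```python
import itertools

# Toy illustration of the Mannor-Shamir graph-feedback model.
# A feedback graph is given as an adjacency dict {action: list of neighbors}.

def observe(graph, losses, action):
    """Feedback for playing `action`: losses of the action and its neighbors."""
    visible = {action} | set(graph[action])
    return {a: losses[a] for a in visible}

def independence_number(graph):
    """Brute-force alpha(G): the largest set of pairwise non-adjacent actions.

    Exponential time; fine only for the tiny graphs of an illustration.
    """
    nodes = list(graph)
    for size in range(len(nodes), 0, -1):
        for subset in itertools.combinations(nodes, size):
            chosen = set(subset)
            if all(not (chosen & set(graph[v])) for v in subset):
                return size
    return 0

# Example: a 4-cycle on actions {0, 1, 2, 3}.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(independence_number(cycle))               # -> 2
print(observe(cycle, [0.1, 0.2, 0.3, 0.4], 0))  # losses of actions 0, 1, 3
```

The independence number interpolates between the extremes: a clique (full information) has $\alpha = 1$, while the empty graph (classical bandits) has $\alpha = d$, matching the $o(\alpha L^{\star})$ guarantee above.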