We study the adversarial online learning problem and develop a completely online algorithmic framework with data-dependent regret guarantees in both the full expert feedback and bandit feedback settings. We analyze the expected performance of our algorithm against general comparators, which makes it applicable to a wide variety of problem scenarios. Our algorithm takes a universal prediction perspective, and its performance is measured by the expected regret against arbitrary comparator sequences, i.e., the difference between our cumulative loss and that of a competing loss sequence. The competition class can be designed to include fixed arm selections, switching bandits, contextual bandits, periodic bandits, or any other competition of interest; the sequences in the competition class are generally determined by the specific application at hand and should be designed accordingly. Our algorithm neither uses nor requires any preliminary information about the loss sequences. Its performance bounds are data-dependent, in the sense that any affine transform of the losses has no effect on the normalized regret.
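To make the regret notion concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of a standard Exp3-style bandit learner, together with the expected-regret computation against an arbitrary comparator sequence. All names (`exp3`, the loss matrix, the fixed-arm comparator) are assumptions introduced for illustration only.

```python
import math
import random

random.seed(0)

def exp3(losses, eta):
    """Illustrative Exp3-style bandit sketch: sample an arm from
    exponential weights, observe only that arm's loss, and update with an
    importance-weighted loss estimate."""
    K = len(losses[0])
    weights = [1.0] * K
    incurred = []
    for row in losses:
        total = sum(weights)
        probs = [w / total for w in weights]
        arm = random.choices(range(K), weights=probs)[0]
        loss = row[arm]                      # bandit feedback: one entry observed
        incurred.append(loss)
        # unbiased importance-weighted estimate of the chosen arm's loss
        weights[arm] *= math.exp(-eta * loss / probs[arm])
    return incurred

# Regret against an arbitrary comparator sequence: the difference between
# our cumulative loss and the comparator's cumulative loss.
T, K = 2000, 3
losses = [[random.random() * (0.5 if k == 0 else 1.0) for k in range(K)]
          for _ in range(T)]                 # arm 0 is better on average
incurred = exp3(losses, eta=math.sqrt(2 * math.log(K) / (T * K)))
comparator = [0] * T                         # fixed-arm comparator: always arm 0
regret = sum(incurred) - sum(losses[t][comparator[t]] for t in range(T))
```

Here the comparator is a fixed arm, but any sequence of arms (switching, contextual, or periodic) can be substituted for `comparator` without changing the regret computation itself.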