We investigate a nonstochastic bandit setting in which the loss of an action is not immediately charged to the player, but rather spread over the subsequent rounds in an adversarial way. The instantaneous loss observed by the player at the end of each round is then a sum of many loss components of previously played actions. This setting encompasses as a special case the easier task of bandits with delayed feedback, a well-studied framework where the player observes the delayed losses individually. Our first contribution is a general reduction transforming a standard bandit algorithm into one that can operate in the harder setting: We bound the regret of the transformed algorithm in terms of the stability and regret of the original algorithm. Then, we show that the transformation of a suitably tuned FTRL with Tsallis entropy has a regret of order $\sqrt{(d+1)KT}$, where $d$ is the maximum delay, $K$ is the number of arms, and $T$ is the time horizon. Finally, we show that our results cannot be improved in general by exhibiting a matching (up to a log factor) lower bound on the regret of any algorithm operating in this setting.
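For concreteness, the FTRL update with the $\tfrac{1}{2}$-Tsallis entropy regularizer over the simplex $\Delta_{K-1}$, combined with the usual importance-weighted loss estimates, can be sketched as follows (this is a standard form assuming losses in $[0,1]$; the paper's exact regularizer scaling, the tuning of $\eta$, and the composite-feedback reduction itself may differ):
\[
p_t \;=\; \operatorname*{argmin}_{p \in \Delta_{K-1}} \left\{ \eta \sum_{s=1}^{t-1} \langle p, \hat{\ell}_s \rangle \;-\; \sum_{i=1}^{K} \sqrt{p_i} \right\},
\qquad
\hat{\ell}_{s,i} \;=\; \frac{\ell_{s,i}\,\mathbb{1}\{I_s = i\}}{p_{s,i}},
\]
where $I_s$ is the arm played at round $s$ and $p_{s,i}$ its selection probability; a suitably tuned update of this form is what the abstract refers to when stating the $\sqrt{(d+1)KT}$ regret bound.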