We study the survival bandit problem, a variant of the multi-armed bandit problem introduced in an open problem by Perotto et al. (2019), with a constraint on the cumulative reward; at each time step, the agent receives a (possibly negative) reward, and if the cumulative reward falls below a prespecified threshold, the procedure stops, a phenomenon called ruin. This is the first paper to study a framework in which ruin may occur but is not certain. We first show that sublinear regret is unachievable under a naive definition of the regret. Next, we provide tight lower bounds on the probability of ruin, together with matching policies. Based on these lower bounds, we define the survival regret as the objective to minimize and provide a policy achieving sublinear survival regret (at least for integer-valued rewards) when the time horizon $T$ is known.
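The protocol described above can be sketched as a short simulation. This is a minimal illustration, not the paper's algorithm: the function name, signature, and arm representation are our own assumptions, with each arm given as a function drawing a (possibly negative) reward.

```python
import random

def survival_bandit(arms, threshold, horizon, policy, budget0=0.0, rng=None):
    """Simulate the survival bandit protocol (illustrative sketch).

    Play until the horizon T is reached, or until the cumulative
    reward falls below `threshold` -- the event called ruin, at
    which point the procedure stops.
    """
    rng = rng or random.Random(0)
    budget = budget0
    history = []
    for t in range(horizon):
        arm = policy(t, history)          # policy may use past (arm, reward) pairs
        reward = arms[arm](rng)           # possibly negative reward
        budget += reward
        history.append((arm, reward))
        if budget < threshold:            # ruin: the procedure stops early
            return budget, t + 1, True
    return budget, horizon, False

# Usage: a single arm that always pays -1 ruins once the cumulative
# reward drops below the threshold of -3, i.e. after 4 pulls.
budget, steps, ruined = survival_bandit(
    [lambda r: -1.0], threshold=-3.0, horizon=10, policy=lambda t, h: 0)
```

A policy minimizing the survival regret must balance the usual exploration–exploitation trade-off against the risk of triggering this early stop.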