We design simple and optimal policies that ensure safety against heavy-tailed risk in the classical multi-armed bandit problem. Recently, \cite{fan2021fragility} showed that information-theoretically optimized bandit algorithms suffer from serious heavy-tailed risk; that is, the worst-case probability of incurring a linear regret decays slowly, at a polynomial rate of $1/T$, where $T$ is the time horizon. Inspired by their results, we further show that widely used policies, such as the standard Upper Confidence Bound policy and the Thompson Sampling policy, also incur heavy-tailed risk, and that this heavy-tailed risk in fact exists for all ``instance-dependent consistent'' policies. To ensure safety against such heavy-tailed risk, for the two-armed bandit setting, we provide a simple policy design that (i) achieves worst-case optimality for the expected regret at order $\tilde O(\sqrt{T})$, and (ii) has a worst-case tail probability of incurring a linear regret that decays at an exponential rate $\exp(-\Omega(\sqrt{T}))$. We further prove that this exponential decay rate of the tail probability is optimal across all policies that are worst-case optimal for the expected regret. Finally, we improve the policy design and analysis to handle the general setting with an arbitrary number $K$ of arms. We provide a detailed characterization of the tail probability bound for any regret threshold under our policy design: the worst-case probability of incurring a regret larger than $x$ is upper bounded by $\exp(-\Omega(x/\sqrt{KT}))$. Numerical experiments are conducted to illustrate the theoretical findings. Our results reveal insights on the incompatibility between consistency and light-tailed risk, while indicating that worst-case optimality on expected regret and light-tailed risk are compatible.
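As a quick sanity check (our own illustration; the notation $\mathrm{Regret}(T)$ for the cumulative regret over horizon $T$ is an assumption, not defined above), the general $K$-armed tail bound recovers the two-armed exponential rate when specialized to a linear regret threshold. Taking $x = cT$ for a constant $c > 0$,
\[
\Pr\big(\mathrm{Regret}(T) \ge cT\big) \;\le\; \exp\!\Big(-\Omega\Big(\tfrac{cT}{\sqrt{KT}}\Big)\Big) \;=\; \exp\!\big(-\Omega(\sqrt{T/K})\big),
\]
which, for any fixed number of arms $K$, matches the $\exp(-\Omega(\sqrt{T}))$ decay established in the two-armed setting.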