Stochastic linear contextual bandit algorithms have substantial practical applications, such as recommender systems, online advertising, and clinical trials. Recent work shows that optimal bandit algorithms are vulnerable to adversarial attacks and can fail completely in their presence. Existing robust bandit algorithms only work for the non-contextual setting under reward attacks and cannot improve robustness in the general and popular contextual bandit environment. In addition, none of the existing methods can defend against attacks on the context. In this work, we provide the first robust bandit algorithm for the stochastic linear contextual bandit setting that achieves sub-linear regret under a fully adaptive and omniscient attack. Our algorithm works not only under reward attacks but also under attacks on the context. Moreover, it does not need any information about the attack budget or the particular form of the attack. We provide theoretical guarantees for our proposed algorithm and show by experiments that it improves robustness against various kinds of popular attacks.