Gaussian processes (GPs) are a widely adopted tool for sequentially optimizing black-box functions whose evaluations are costly and potentially noisy. Recent works on GP bandits have proposed moving beyond random noise and devising algorithms that are robust to adversarial attacks. This paper studies this problem from the attacker's perspective, proposing various adversarial attack methods with differing assumptions on the attacker's strength and prior information. Our goal is to understand adversarial attacks on GP bandits from both theoretical and practical perspectives. We focus primarily on targeted attacks on the popular GP-UCB algorithm and a related elimination-based algorithm, based on adversarially perturbing the function $f$ to produce another function $\tilde{f}$ whose optima lie in some target region $\mathcal{R}_{\rm target}$. Based on our theoretical analysis, we devise both white-box attacks (known $f$) and black-box attacks (unknown $f$), with the former including a Subtraction attack and a Clipping attack, and the latter including an Aggressive subtraction attack. We demonstrate that adversarial attacks on GP bandits can succeed in forcing the algorithm toward $\mathcal{R}_{\rm target}$ even with a low attack budget, and we test our attacks' effectiveness on a diverse range of objective functions.
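To make the targeted-perturbation idea concrete, the sketch below illustrates a clipping-style perturbation on a toy one-dimensional objective: the reported values of $f$ outside the target region are capped just below the best value attainable inside $\mathcal{R}_{\rm target}$, so the optima of the perturbed function $\tilde{f}$ fall inside the target region. This is only a minimal illustration under assumed names and a toy objective (`f`, `perturb_clip`, `target_lo`, `target_hi`, and `margin` are all hypothetical), not the paper's exact attack construction.

```python
import numpy as np

# Illustrative sketch only (hypothetical names, toy objective): a
# clipping-style perturbation caps the reported values of f outside the
# target region just below the best value achievable inside it, so the
# optima of the perturbed function lie in R_target = [target_lo, target_hi].

def f(x):
    # Toy 1-D objective whose true optimum lies near x = 0.2 (outside R_target).
    return np.exp(-20.0 * (x - 0.2) ** 2) + 0.5 * np.exp(-50.0 * (x - 0.8) ** 2)

def perturb_clip(f, x, target_lo=0.7, target_hi=0.9, margin=0.05):
    """Return the perturbed values f_tilde(x)."""
    grid = np.linspace(target_lo, target_hi, 200)
    cap = f(grid).max() - margin                  # ceiling applied outside R_target
    in_target = (x >= target_lo) & (x <= target_hi)
    fx = f(x)
    return np.where(in_target, fx, np.minimum(fx, cap))

# The attacker feeds f_tilde(x) + noise to the bandit algorithm instead of
# f(x) + noise; the per-query perturbation |f_tilde(x) - f(x)| contributes
# to the attack budget.
x = np.linspace(0.0, 1.0, 1001)
f_tilde = perturb_clip(f, x)
x_star = x[np.argmax(f_tilde)]
assert 0.7 <= x_star <= 0.9   # the perturbed optimum now lies in R_target
print(f"perturbed optimum at x = {x_star:.3f}")
```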