Gaussian processes (GPs) are a widely adopted tool for sequentially optimizing black-box functions whose evaluations are costly and potentially noisy. Recent work on GP bandits has proposed moving beyond random noise and devising algorithms that are robust to adversarial attacks. In this paper, we study this problem from the attacker's perspective, proposing various adversarial attack methods with differing assumptions on the attacker's strength and prior information. Our goal is to understand adversarial attacks on GP bandits from both a theoretical and practical perspective. We focus primarily on targeted attacks on the popular GP-UCB algorithm and a related elimination-based algorithm, based on adversarially perturbing the function $f$ to produce another function $\tilde{f}$ whose optima lie in some target region $\mathcal{R}_{\rm target}$. Based on our theoretical analysis, we devise both white-box attacks (known $f$) and black-box attacks (unknown $f$), with the former including a Subtraction attack and a Clipping attack, and the latter including an Aggressive subtraction attack. We demonstrate that adversarial attacks on GP bandits can succeed in forcing the algorithm towards $\mathcal{R}_{\rm target}$ even with a low attack budget, and we compare our attacks' performance and efficiency on several real and synthetic functions.
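To make the attack model concrete, below is a minimal, hedged Python sketch of a white-box perturbation in the spirit of the subtraction/clipping attacks described above: an attacker who can evaluate $f$ reports corrupted observations of a function $\tilde{f} = f - h$ whose maxima lie inside $\mathcal{R}_{\rm target}$. The toy objective, target-region bounds, and margin are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Illustrative sketch only: a white-box attacker knows f and corrupts each
# observation so that the reported function f_tilde = f - h attains its
# maximum inside a chosen target region R_target.

def f(x):
    # Toy 1-D objective with its true maximum near x = 0.2 (an assumption
    # for illustration, not one of the paper's benchmark functions).
    return np.exp(-((x - 0.2) ** 2) / 0.02)

TARGET = (0.7, 0.9)   # R_target, chosen by the attacker
MARGIN = 0.05         # how far below the target-region maximum outside values are pushed

def perturbed_observation(x, rng=None, noise_std=0.01):
    """Return the corrupted value y = f_tilde(x), optionally with noise."""
    # Maximum of f over the target region (white-box: the attacker can evaluate f).
    grid = np.linspace(*TARGET, 200)
    target_max = f(grid).max()
    inside = TARGET[0] <= x <= TARGET[1]
    # Subtract just enough outside R_target so no outside point beats the target region.
    h = 0.0 if inside else max(0.0, float(f(x)) - (target_max - MARGIN))
    y = float(f(x)) - h
    if rng is not None:
        y += rng.normal(scale=noise_std)
    return y

# Example: a GP-UCB learner querying x = 0.2 (the true optimum) now observes
# a value no larger than what it can obtain inside R_target.
print(perturbed_observation(0.2), perturbed_observation(0.8))
```

Under this sketch, the cumulative attack budget is the total amount subtracted, $\sum_t h(x_t)$, which stays small once the learner's queries concentrate inside $\mathcal{R}_{\rm target}$.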