The RKHS bandit problem (also called the kernelized multi-armed bandit problem) is an online optimization problem over non-linear functions with noisy feedback. Although the problem has been extensively studied, some results remain unsatisfactory compared to the well-studied linear bandit setting. Specifically, there is no general algorithm for the adversarial RKHS bandit problem. In addition, the high computational complexity of existing algorithms hinders practical application. We address these issues by considering a novel amalgamation of approximation theory and the misspecified linear bandit problem. Using an approximation method, we propose efficient algorithms for the stochastic RKHS bandit problem and the first general algorithm for the adversarial RKHS bandit problem. Furthermore, we empirically confirm one of our theoretical results: our proposed method attains cumulative regret comparable to IGP-UCB while its running time is much shorter.
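To illustrate the reduction underlying the abstract (not the paper's exact construction), the following sketch uses random Fourier features as one possible approximation method: a function in an RBF-kernel RKHS becomes approximately linear in a finite feature space, so an RKHS bandit can be treated as a misspecified linear bandit, with the kernel approximation error playing the role of the misspecification. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: random Fourier features (Rahimi-Recht style) approximate
# the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2) by a finite-dimensional
# inner product phi(x)^T phi(y). The residual error is the "misspecification"
# a misspecified linear bandit algorithm must tolerate.
rng = np.random.default_rng(0)
d, m = 2, 500          # input dimension, number of random features (assumed)
gamma = 0.5            # RBF bandwidth parameter (assumed)

# Frequencies sampled from the kernel's spectral density, N(0, 2*gamma*I),
# plus uniform phase shifts.
W = rng.normal(scale=np.sqrt(2 * gamma), size=(m, d))
b = rng.uniform(0.0, 2 * np.pi, size=m)

def phi(x):
    """Feature map with phi(x) @ phi(y) ~ k(x, y)."""
    return np.sqrt(2.0 / m) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = phi(x) @ phi(y)
print(abs(exact - approx))  # misspecification error, O(1/sqrt(m))
```

With the kernel replaced by `phi`, any linear bandit algorithm that is robust to bounded misspecification can be run directly in the `m`-dimensional feature space, which is the source of the computational savings the abstract refers to.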