We show that a kernel estimator that uses multiple function evaluations can easily be converted into a sampling-based bandit estimator whose expectation equals the original kernel estimate. Plugging this bandit estimator into the standard FTRL algorithm yields a bandit convex optimisation algorithm that achieves $\tilde{O}(t^{1/2})$ regret against adversarial, time-varying convex loss functions.
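The core conversion can be illustrated with a minimal sketch. Assume the kernel estimate is a weighted average of function evaluations at a finite set of points (the grid, weights, and loss function below are hypothetical placeholders, not the paper's construction): sampling a single evaluation point with probability equal to its kernel weight yields a one-evaluation bandit estimator that is unbiased for the full kernel estimate by construction.

```python
import random

def kernel_estimate(f, points, weights):
    # Full kernel estimate: weighted average over many function evaluations.
    return sum(w * f(x) for x, w in zip(points, weights))

def bandit_estimate(f, points, weights, rng):
    # One-point bandit version: sample an evaluation point with probability
    # equal to its kernel weight, paying a single function evaluation.
    # By construction, E[bandit_estimate] = kernel_estimate.
    x = rng.choices(points, weights=weights, k=1)[0]
    return f(x)

# Hypothetical convex loss and uniform kernel weights for illustration.
f = lambda x: (x - 0.3) ** 2
points = [i / 10 for i in range(11)]
weights = [1 / 11] * 11

rng = random.Random(0)
exact = kernel_estimate(f, points, weights)
mc = sum(bandit_estimate(f, points, weights, rng)
         for _ in range(200_000)) / 200_000
print(abs(exact - mc) < 1e-2)  # Monte Carlo mean matches the kernel estimate
```

The FTRL learner would then consume `bandit_estimate` in place of the full kernel estimate; unbiasedness is what lets the regret analysis carry over in expectation.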