Zeroth-order (ZO) optimization typically relies on two-point feedback to estimate the unknown gradient of the objective function. However, two-point feedback cannot be used for the online optimization of time-varying objective functions, where only a single query of the function value is possible at each time step. In this work, we propose a new one-point feedback method for online optimization that estimates the objective function gradient using the residual between the function values queried at two consecutive time instants. Moreover, we develop regret bounds for ZO with residual feedback for both convex and nonconvex online optimization problems. Specifically, for both deterministic and stochastic problems, and for both Lipschitz and smooth objective functions, we show that using residual feedback produces gradient estimates with much smaller variance than conventional one-point feedback methods. As a result, our regret bounds are much tighter than existing regret bounds for ZO with conventional one-point feedback, which suggests that ZO with residual feedback can better track the optimizer of online optimization problems. Additionally, our regret bounds rely on weaker assumptions than those used in conventional one-point feedback methods. Numerical experiments show that ZO with residual feedback significantly outperforms existing one-point feedback methods in practice as well.
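To make the mechanism concrete, the following is a minimal Python sketch of an online ZO update with one-point residual feedback. It assumes the estimator takes the form (d/δ)[f_t(x_t + δu_t) − f_{t−1}(x_{t−1} + δu_{t−1})]u_t, with u_t drawn uniformly from the unit sphere; the function names, the drifting quadratic test objective, and all hyperparameter values are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def residual_feedback_step(f_t, x, prev_value, delta, lr, rng):
    """One online ZO update using a one-point residual-feedback estimate.

    f_t        : current objective; queried exactly once in this step
    x          : current iterate, shape (d,)
    prev_value : perturbed function value stored from the previous step (or None)
    delta      : smoothing radius
    lr         : step size
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)              # uniform random direction on the unit sphere
    value = f_t(x + delta * u)          # the single function query allowed at time t
    if prev_value is None:              # first step: no residual available yet
        grad_est = np.zeros(d)
    else:
        # residual between the two consecutive feedback points scales the direction
        grad_est = (d / delta) * (value - prev_value) * u
    return x - lr * grad_est, value

# Illustrative usage on a time-varying quadratic with a drifting optimum.
rng = np.random.default_rng(0)
x, prev = np.zeros(5), None
for t in range(1000):
    target = np.sin(0.01 * t) * np.ones(5)            # hypothetical drifting optimum
    f_t = lambda z, c=target: float(np.sum((z - c) ** 2))
    x, prev = residual_feedback_step(f_t, x, prev, delta=0.05, lr=0.05, rng=rng)
```

Note the design point the abstract emphasizes: only one query per time step is made, and the previous perturbed value is reused as the second point of the residual, in contrast to two-point feedback, which would require two queries of the same f_t.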