We investigate online convex optimization in non-stationary environments and choose the \emph{dynamic regret} as the performance measure, defined as the difference between the cumulative loss incurred by the online algorithm and that of any feasible comparator sequence. Let $T$ denote the time horizon and $P_T$ the path-length, which essentially reflects the non-stationarity of the environment; the state-of-the-art dynamic regret bound is $\mathcal{O}(\sqrt{T(1+P_T)})$. Although this bound is proved to be minimax optimal for convex functions, in this paper we demonstrate that it is possible to further enhance the guarantee for some easy problem instances, particularly when the online functions are smooth. Specifically, we propose novel online algorithms that can exploit smoothness and replace the dependence on $T$ in the dynamic regret by \emph{problem-dependent} quantities: the variation in gradients of the loss functions, the cumulative loss of the comparator sequence, and the minimum of the previous two terms. These quantities are at most $\mathcal{O}(T)$, yet can be much smaller in benign environments. Therefore, our results are adaptive to the intrinsic difficulty of the problem: the bounds are tighter than existing results for easy problems while guaranteeing the same rate in the worst case. Notably, our algorithm requires only \emph{one} gradient per iteration, matching the gradient query complexity of methods developed for optimizing the static regret. As a further application, we extend the results from the full-information setting to bandit convex optimization with two-point feedback and thereby attain the first problem-dependent dynamic regret for such bandit tasks.
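To make the performance measure concrete, the following sketch computes the dynamic regret and the path-length $P_T$ on a toy non-stationary problem. All specifics here are illustrative assumptions, not the paper's setup: we use one-dimensional quadratic losses $f_t(x) = (x - \theta_t)^2$ with drifting minimizers $\theta_t$, plain online gradient descent as the learner, and the comparator sequence $u_t = \theta_t$.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# Illustrative drifting minimizers: f_t(x) = (x - theta_t)^2 (not from the paper)
theta = np.cumsum(0.05 * rng.standard_normal(T))

x, eta = 0.0, 0.1       # learner's iterate and step size (assumed values)
alg_loss = 0.0
cmp_loss = 0.0
for t in range(T):
    alg_loss += (x - theta[t]) ** 2   # loss of the online algorithm at round t
    cmp_loss += 0.0                   # comparator u_t = theta_t incurs zero loss here
    grad = 2 * (x - theta[t])         # the single gradient query of the round
    x -= eta * grad                   # online gradient descent update

# Path-length of the comparator sequence: sum of successive distances
P_T = np.abs(np.diff(theta)).sum()
dyn_regret = alg_loss - cmp_loss
print(f"P_T = {P_T:.3f}, dynamic regret = {dyn_regret:.3f}")
```

A larger drift (bigger $P_T$) makes the comparator harder to track, which is why the worst-case bound $\mathcal{O}(\sqrt{T(1+P_T)})$ grows with the path-length.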