In this paper, we present an improved analysis of the dynamic regret of strongly convex and smooth functions. Specifically, we investigate the Online Multiple Gradient Descent (OMGD) algorithm proposed by Zhang et al. (2017). The original analysis shows that the dynamic regret of OMGD is at most $\mathcal{O}(\min\{\mathcal{P}_T,\mathcal{S}_T\})$, where $\mathcal{P}_T$ and $\mathcal{S}_T$ are the path-length and the squared path-length, which measure the cumulative movement of the minimizers of the online functions. We demonstrate that, through an improved analysis, the dynamic regret of OMGD can be sharpened to $\mathcal{O}(\min\{\mathcal{P}_T,\mathcal{S}_T,\mathcal{V}_T\})$, where $\mathcal{V}_T$ is the function variation of the online functions. Note that the quantities $\mathcal{P}_T$, $\mathcal{S}_T$, and $\mathcal{V}_T$ essentially reflect different aspects of environmental non-stationarity -- they are not comparable in general and are favored in different scenarios. Therefore, the dynamic regret bound presented in this paper actually achieves a \emph{best-of-three-worlds} guarantee and is strictly tighter than previous results.
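For concreteness, the three non-stationarity measures can be written as follows, using the formulations standard in the dynamic regret literature (the exact normalization here is our rendering and may differ slightly from the definitions in the body of the paper); $\mathbf{x}_t^*$ denotes a minimizer of the $t$-th online function $f_t$ over the feasible domain $\mathcal{X}$:
\[
\mathcal{P}_T = \sum_{t=2}^{T} \big\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\big\|_2, \qquad
\mathcal{S}_T = \sum_{t=2}^{T} \big\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\big\|_2^2, \qquad
\mathcal{V}_T = \sum_{t=2}^{T} \sup_{\mathbf{x}\in\mathcal{X}} \big|f_t(\mathbf{x}) - f_{t-1}(\mathbf{x})\big|.
\]
For instance, when the minimizers drift slowly but the function values fluctuate wildly, $\mathcal{S}_T$ can be much smaller than $\mathcal{V}_T$, whereas the opposite holds when the functions change little in value but their minimizers jump; this is why the three quantities are not comparable in general.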