We present a unified analysis method, based on the generalized cosine rule and $\phi$-convexity, for online optimization in normed vector spaces, using dynamic regret as the performance metric. Combining the update rules, we start from strategy $S$ (a two-parameter variant strategy covering Optimistic-FTRL with surrogate linearized losses) and obtain, by relaxation, $S$-I (the type-I relaxed variant of $S$) and $S$-II (the type-II relaxed variant of $S$, which is Optimistic-MD). The regret bounds for $S$-I and $S$-II are the tightest possible. As instantiations, the regret bounds obtained for normalized exponentiated subgradient and greedy/lazy projection improve on the best currently known results. By replacing the losses of the online game with monotone operators and extending the definition of regret, namely regret$^n$, we extend online convex optimization to online monotone optimization, which broadens the scope of application of $S$-I and $S$-II.
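For context, a minimal sketch of the performance metric in its commonly used form (the exact definition adopted in normed vector spaces may differ): for losses $f_1,\dots,f_T$, decisions $x_1,\dots,x_T$, and an arbitrary comparator sequence $u_1,\dots,u_T$, the dynamic regret is typically written as
\[
\mathrm{Regret}_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t),
\]
which reduces to static regret when $u_1 = \dots = u_T$.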