Recently, several universal methods have been proposed for online convex optimization, which attain minimax rates for multiple types of convex functions simultaneously. However, they need to design and optimize one surrogate loss for each type of function, making it difficult to exploit the structure of the problem and to utilize the vast body of existing algorithms. In this paper, we propose a simple strategy for universal online convex optimization that avoids these limitations. The key idea is to construct a set of experts to process the original online functions, and to deploy a meta-algorithm over the \emph{linearized} losses to aggregate the experts' predictions. Specifically, we choose Adapt-ML-Prod to track the best expert, because it enjoys a second-order bound and can therefore exploit strong convexity and exponential concavity. In this way, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds. Furthermore, our strategy inherits the theoretical guarantee of any expert designed for strongly convex functions or exponentially concave functions, up to a double logarithmic factor. For general convex functions, it maintains minimax optimality and also achieves a small-loss bound.
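A minimal sketch of the aggregation idea: experts propose points, the meta-algorithm plays their weighted average, and each expert is charged the \emph{linearized} loss $\langle \nabla f_t(x_t), x_{t,i}\rangle$ rather than the original loss. For brevity this sketch uses a plain exponential-weights meta-update in place of Adapt-ML-Prod, and the "experts" and loss function are toy stand-ins; all names here are illustrative, not the paper's implementation.

```python
import numpy as np

def meta_aggregate(weights, expert_preds):
    # Meta-prediction: weighted average of expert predictions.
    return weights @ expert_preds

def linearized_losses(grad, expert_preds, x):
    # Surrogate loss of expert i: <g_t, x_{t,i} - x_t>,
    # an upper bound on its regret contribution by convexity.
    return expert_preds @ grad - x @ grad

rng = np.random.default_rng(0)
K, d, eta = 3, 2, 0.5            # 3 toy experts in R^2, fixed learning rate
w = np.ones(K) / K               # uniform initial meta-weights
for t in range(20):
    preds = rng.normal(size=(K, d))   # stand-in expert outputs
    x = meta_aggregate(w, preds)      # point actually played
    g = 2 * x                         # gradient of the toy loss f(x) = ||x||^2
    ell = linearized_losses(g, preds, x)
    w *= np.exp(-eta * ell)           # multiplicative-weights step
    w /= w.sum()                      # renormalize to the simplex

assert np.isclose(w.sum(), 1.0) and np.all(w >= 0)
```

Adapt-ML-Prod replaces the fixed learning rate with per-expert adaptive rates, which is what yields the second-order bound exploited above.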