Recently, several universal methods have been proposed for online convex optimization that can simultaneously handle convex, strongly convex, and exponentially concave cost functions. However, most of these algorithms are designed to minimize static regret, a notion that may be unsuitable for changing environments. To address this shortcoming, we propose a novel and intuitive framework for universal online optimization in dynamic environments. Unlike existing universal algorithms, our strategy does not rely on constructing a set of experts and an accompanying meta-algorithm. Instead, we show that the problem of dynamic online optimization can be reduced to a uniclass prediction problem. By leaving the choice of the uniclass loss function in the user's hands, the user can control and optimize dynamic regret bounds, which in turn carry over to the original problem. To the best of our knowledge, this is the first paper to propose a universal approach with state-of-the-art dynamic regret guarantees, even for general convex cost functions.
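For context, the distinction between the two regret notions can be made precise with the standard definitions from the online learning literature (the notation below is ours; the paper's own symbols may differ):

```latex
% Static regret: cumulative loss against the best single fixed decision in hindsight
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(\mathbf{x}_t) \;-\; \min_{\mathbf{x}\in\mathcal{X}} \sum_{t=1}^{T} f_t(\mathbf{x})

% Dynamic regret: cumulative loss against an arbitrary comparator sequence
% u_1, \dots, u_T, which may track a changing environment
\mathrm{D\text{-}Regret}_T(\mathbf{u}_1,\dots,\mathbf{u}_T) \;=\; \sum_{t=1}^{T} f_t(\mathbf{x}_t) \;-\; \sum_{t=1}^{T} f_t(\mathbf{u}_t)
```

Since the comparator in dynamic regret may change at every round, it upper-bounds static regret (recovered by setting all $\mathbf{u}_t$ to the fixed minimizer) and is the more demanding benchmark in non-stationary environments.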