We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions and thus captures temporal effects of learning problems. In this paper, we introduce dynamic policy regret as the performance measure to design algorithms robust to non-stationary environments, which compares the algorithm's decisions against a sequence of changing comparators. We propose a novel algorithm for OCO with memory that provably enjoys optimal dynamic policy regret. The key technical challenge is to control the switching cost, the cumulative movement of the player's decisions, which is neatly addressed by a novel decomposition of dynamic policy regret and an appropriate meta-expert structure. Furthermore, we generalize the results to the problem of online non-stochastic control, i.e., controlling a linear dynamical system with adversarial disturbances and convex loss functions. We derive a novel gradient-based controller with dynamic policy regret guarantees, which is the first controller competitive with a sequence of changing policies.
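For concreteness, the following is a minimal sketch of the performance measure under one standard formulation of OCO with memory; the memory length $m$, losses $f_t$, decisions $x_t$, and comparators $u_t$ are notational assumptions introduced here for illustration rather than taken from the abstract.

% A minimal sketch of dynamic policy regret for OCO with memory (assumed
% standard formulation): each loss f_t depends on the last m decisions, and
% the benchmark is an arbitrary, changing comparator sequence u_1, ..., u_T
% rather than a single fixed comparator, which is what makes the measure
% "dynamic".
\[
  \mathrm{D\text{-}Regret}_T(u_1,\dots,u_T)
    \;=\; \sum_{t=m}^{T} f_t(x_{t-m+1},\dots,x_t)
    \;-\; \sum_{t=m}^{T} f_t(u_{t-m+1},\dots,u_t).
\]
% The switching cost referred to above is the cumulative movement of the
% player's decisions, \sum_{t=2}^{T} \|x_t - x_{t-1}\|, which must be kept
% small because each loss couples the m most recent decisions.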