This paper investigates the problem of regret minimization in linear time-varying (LTV) dynamical systems. Due to the simultaneous presence of uncertainty and non-stationarity, designing online control algorithms for unknown LTV systems remains a challenging task. Prior works have introduced online convex optimization algorithms at the cost of NP-hard offline planning, and these algorithms suffer from nonparametric regret rates. In this paper, we propose the first computationally tractable online algorithm with regret guarantees that avoids offline planning over state linear feedback policies. Our algorithm is based on the optimism in the face of uncertainty (OFU) principle, in which we optimistically select the best model within a high-confidence region; our algorithm is therefore more explorative than previous approaches. To overcome non-stationarity, we propose either a restarting strategy (R-OFU) or a sliding-window strategy (SW-OFU). With proper configuration, our algorithm attains sublinear regret $O(T^{2/3})$. Both strategies utilize data from the current phase to track variations in the system dynamics. We corroborate our theoretical findings with numerical experiments, which highlight the effectiveness of our methods. To the best of our knowledge, our study establishes the first model-based online algorithm with regret guarantees for LTV dynamical systems.