In this paper we provide provable regret guarantees for an online meta-learning receding horizon control algorithm in an iterative control setting, where in each iteration the system to be controlled is a different and unknown linear deterministic system, the cost for the controller in an iteration is a general additive cost function, and the control input is subject to constraints whose violation incurs an additional cost. We prove (i) that the algorithm achieves regret for the controller cost and constraint violation of $O(T^{3/4})$ over an episode of duration $T$ with respect to the best policy that satisfies the control input constraints, and (ii) that the average regret for the controller cost and constraint violation with respect to the same policy varies as $O((1+1/\sqrt{N})T^{3/4})$ with the number of iterations $N$, showing that the worst-case regret for the learning within an iteration continuously improves with the experience of more iterations.
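For concreteness, a standard way to formalize the two regret notions above (a sketch under assumed notation, since the abstract does not give formal definitions; here $c_t$ denotes the stage cost, $(x_t, u_t)$ the state-input trajectory of the algorithm, $(x_t^*, u_t^*)$ that of the best constraint-satisfying comparator policy, and $g(u) \le 0$ the input constraint) is:

\[
\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} c_t(x_t, u_t) \;-\; \sum_{t=1}^{T} c_t(x_t^*, u_t^*),
\qquad
\mathrm{Regret}^{c}(T) \;=\; \sum_{t=1}^{T} \big[ g(u_t) \big]_{+},
\]

where $[\,\cdot\,]_{+} = \max\{\cdot, 0\}$ measures the magnitude of constraint violation. The stated guarantees then read $\mathrm{Regret}(T), \mathrm{Regret}^{c}(T) = O(T^{3/4})$ within an episode, and $O\!\big((1 + 1/\sqrt{N})\,T^{3/4}\big)$ on average across $N$ iterations.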