In this work we address the problem of performing a repetitive task under uncertain observations and dynamics. We formulate this problem as an iterative infinite-horizon optimal control problem with output feedback. Previously, this problem was solved for linear time-invariant (LTI) systems in the case where noisy full-state measurements are available, using a robust iterative learning control framework, which we refer to as robust learning-based model predictive control (RL-MPC). However, that work does not apply when only noisy observations of part of the state are available. This limits the applicability of current approaches in practice: first, in practical applications we typically do not have access to the full state; second, uncertainties in the observations, when not accounted for, can lead to instability and constraint violations. To overcome these limitations, we propose a combination of RL-MPC with robust output feedback model predictive control, named robust learning-based output feedback model predictive control (RLO-MPC). We show recursive feasibility and stability, and prove theoretical guarantees on the performance over iterations. We validate the proposed approach with a numerical example in simulation and a quadrotor stabilization task in experiments.
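To make the output feedback setting concrete, the following is a minimal sketch (not the paper's RLO-MPC controller) of an uncertain LTI system where the controller only sees a noisy observation of part of the state and therefore acts on an observer estimate. All matrices, gains, and noise bounds are hypothetical placeholders chosen for illustration.

```python
import numpy as np

# Hypothetical double-integrator dynamics: x = [position, velocity].
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
C = np.array([[1.0, 0.0]])    # only the position is measured

L = np.array([[0.5],
              [0.8]])          # observer gain (assumed to stabilize A - L C)
K = np.array([[-1.0, -1.5]])   # feedback gain (assumed to stabilize A + B K)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])       # true state (unknown to the controller)
x_hat = np.zeros(2)            # observer estimate used by the controller

for t in range(50):
    # The controller acts on the estimate, never on the true state.
    u = K @ x_hat
    # Noisy partial observation y = C x + v, with bounded measurement noise v.
    y = C @ x + rng.uniform(-0.01, 0.01, size=1)
    # Luenberger observer: predict with the model, correct with the innovation.
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    # True dynamics with bounded process noise w.
    x = A @ x + B @ u + rng.uniform(-0.01, 0.01, size=2)

print("final estimate:", x_hat, "final true state:", x)
```

In the paper's setting, the static observer and feedback gains above would be replaced by a robust output feedback MPC that accounts for the bounded estimation error, combined with learning over iterations of the repetitive task.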