Model predictive control (MPC) is a powerful trajectory-optimization control technique capable of controlling complex nonlinear systems while respecting system constraints and ensuring safe operation. These capabilities come at the cost of high online computational complexity, the requirement of an accurate model of the system dynamics, and the need to tune the MPC's parameters to the specific control application. The main tunable parameter affecting computational complexity is the prediction horizon length, which controls how far into the future the MPC predicts the system response and thus over what span it evaluates the optimality of its computed trajectory. A longer horizon generally improves control performance but demands an increasingly powerful computing platform, which can rule out certain control applications. The sensitivity of performance to the prediction horizon length varies over the state space, which motivated adaptive horizon model predictive control (AHMPC), a scheme that adapts the prediction horizon according to some criterion. In this paper we propose to learn the optimal prediction horizon as a function of the state using reinforcement learning (RL). We show how the RL learning problem can be formulated, and we test our method on two control tasks, showing clear improvements over the fixed-horizon MPC scheme while requiring only minutes of learning.
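The idea of treating the horizon length as an RL action can be sketched with a deliberately minimal toy: a scalar linear system whose finite-horizon MPC reduces to a Riccati recursion, and a tabular Q-learning agent that picks the horizon per state bin, trading stage cost against a per-step computation penalty. This is not the paper's method or its benchmarks; the system, cost weights, horizon set, and discretization below are all illustrative assumptions.

```python
import random

# Illustrative scalar system x_{k+1} = A*x + B*u with quadratic stage cost.
# All constants here are assumptions for the sketch, not from the paper.
A, B, Q, R = 1.2, 1.0, 1.0, 0.1

def mpc_gain(horizon):
    """First feedback gain of a finite-horizon LQR (backward Riccati
    recursion), standing in for the MPC's first move on a linear system."""
    p = Q  # terminal cost weight
    k = 0.0
    for _ in range(horizon):
        k = (B * p * A) / (R + B * p * B)
        p = Q + A * p * A - A * p * B * k
    return k

def step(x, horizon):
    """Apply the horizon-dependent MPC move; reward = -(stage cost +
    a penalty proportional to the horizon, modeling compute time)."""
    u = -mpc_gain(horizon) * x
    x_next = A * x + B * u
    cost = Q * x * x + R * u * u + 0.01 * horizon
    return x_next, -cost

HORIZONS = [1, 3, 10]          # candidate prediction horizons (RL actions)

def bin_state(x):
    """Coarse discretization of |x| so tabular Q-learning applies."""
    ax = abs(x)
    return 0 if ax < 0.5 else (1 if ax < 2.0 else 2)

qtab = [[0.0] * len(HORIZONS) for _ in range(3)]
alpha, gamma, eps = 0.1, 0.95, 0.2
rng = random.Random(0)

for _episode in range(300):
    x = rng.uniform(-3.0, 3.0)
    for _ in range(30):
        s = bin_state(x)
        if rng.random() < eps:   # epsilon-greedy exploration
            a = rng.randrange(len(HORIZONS))
        else:
            a = max(range(len(HORIZONS)), key=lambda i: qtab[s][i])
        x, r = step(x, HORIZONS[a])
        s2 = bin_state(x)
        qtab[s][a] += alpha * (r + gamma * max(qtab[s2]) - qtab[s][a])

# The learned policy maps each state bin to a prediction horizon length.
policy = [HORIZONS[max(range(len(HORIZONS)), key=lambda i: qtab[s][i])]
          for s in range(3)]
print(policy)
```

The key structural point the sketch shares with the abstract is that the horizon is no longer a single offline tuning constant: it becomes a state-dependent decision learned from closed-loop reward, with the computation penalty making shorter horizons attractive wherever they cost little performance.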