We consider the problem of nonlinear stochastic optimal control. This problem is widely regarded as fundamentally intractable owing to Bellman's infamous "curse of dimensionality". We present a result showing that repeatedly solving an open-loop deterministic problem from the current state, in the spirit of Model Predictive Control (MPC), yields a feedback policy within $O(\epsilon^4)$ of the true globally optimal stochastic policy. Furthermore, empirical results show that solving the stochastic Dynamic Programming (DP) problem is highly susceptible to noise even when it is tractable; in practice, the MPC-type feedback law offers superior performance even for stochastic systems.
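The MPC-type feedback law described above can be sketched as follows: at each time step, solve a deterministic open-loop optimal control problem from the current (noise-perturbed) state, apply only the first control of the resulting sequence, and repeat. The double-integrator dynamics, horizon, cost weights, and noise level below are illustrative assumptions, not quantities from the paper.

```python
# Minimal sketch of a receding-horizon (MPC-style) feedback law:
# re-solve a deterministic open-loop problem from the current state,
# apply the first control, and let replanning absorb the process noise.
# System, horizon, and cost weights are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dt, H, steps = 0.1, 10, 30             # step size, planning horizon, sim length
A = np.array([[1.0, dt], [0.0, 1.0]])  # double-integrator dynamics (assumed)
B = np.array([[0.0], [dt]])

def open_loop_cost(u_seq, x0):
    """Deterministic cost of a control sequence rolled out from x0."""
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        cost += x @ x + 0.1 * u * u    # quadratic running cost
        x = A @ x + (B * u).ravel()    # noise-free prediction model
    return cost + 10.0 * (x @ x)       # terminal penalty

def mpc_step(x):
    """Solve the open-loop problem from x; return only the first control."""
    res = minimize(open_loop_cost, np.zeros(H), args=(x,))
    return res.x[0]

x = np.array([1.0, 0.0])               # initial state
for _ in range(steps):
    u = mpc_step(x)                               # deterministic replan
    noise = 0.05 * rng.standard_normal(2)         # process noise in the true system
    x = A @ x + (B * u).ravel() + noise           # apply first control only

print(float(np.linalg.norm(x)))        # state is regulated near the origin
```

Note that the open-loop plan ignores the noise entirely; the feedback character of the policy comes solely from replanning at every step from the realized state, which is the mechanism the result above analyzes.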