Optimal control (OC) is an effective approach to controlling complex dynamical systems. However, typical approaches to parameterising and learning controllers in optimal control have been ad hoc: data are collected and then fitted to neural networks in a two-step process that can overlook crucial requirements such as optimality and time-varying behaviour. We introduce a unified, function-first framework that learns Lyapunov or value functions while implicitly solving the underlying OC problem. We propose two mathematical programs, based on the Hamilton-Jacobi-Bellman (HJB) constraint and its relaxation, for learning time-varying value and Lyapunov functions. We demonstrate the effectiveness of our approach on linear and nonlinear control-affine problems. The proposed methods generate near-optimal trajectories and guarantee the Lyapunov condition over a compact set of initial conditions. Furthermore, we compare our methods against Soft Actor-Critic (SAC) and Proximal Policy Optimisation (PPO): we never underperform in task cost and, in the best cases, outperform SAC and PPO by factors of 73 and 22, respectively.
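As a point of reference for the HJB-constrained programs mentioned above (the abstract does not state the exact formulation, so the cost structure here is an illustrative assumption), consider a control-affine system $\dot{x} = f(x) + g(x)u$ with running cost $\ell(x,u) = q(x) + u^\top R u$, $R \succ 0$. A time-varying value function $V(x,t)$ then satisfies the standard HJB equation

\[
-\partial_t V(x,t) \;=\; \min_{u}\Big[\, q(x) + u^\top R u + \nabla_x V(x,t)^\top \big(f(x) + g(x)u\big) \Big],
\]

whose inner minimisation admits the closed-form solution $u^*(x,t) = -\tfrac{1}{2} R^{-1} g(x)^\top \nabla_x V(x,t)$. A learned $V$ that (approximately) satisfies this constraint therefore yields a controller implicitly, without a separate policy-fitting step.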