This paper presents the Relaxed Continuous-Time Actor-Critic (RCTAC) algorithm, a method for finding a nearly optimal policy for nonlinear continuous-time (CT) systems with known dynamics over an infinite horizon, such as the path-tracking control of vehicles. RCTAC has several advantages over existing adaptive dynamic programming algorithms for CT systems. It requires neither the ``admissibility'' of the initial policy nor an input-affine system structure for convergence. Instead, starting from any initial policy, RCTAC converges to an admissible, and subsequently nearly optimal, policy for a general nonlinear system with a saturated controller. RCTAC consists of two phases: a warm-up phase and a generalized policy iteration phase. The warm-up phase minimizes the square of the Hamiltonian to achieve admissibility, while the generalized policy iteration phase relaxes the update termination conditions for faster convergence. The convergence and optimality of the algorithm are proven through Lyapunov analysis, and its effectiveness is demonstrated through simulations and real-world path-tracking tasks.
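The warm-up idea of driving the squared Hamiltonian toward zero can be illustrated on a toy problem. The sketch below, which is an illustration only and not the paper's actor-critic implementation, uses a scalar linear system with quadratic cost, a quadratic value guess `V(x) = p*x^2`, and finite-difference gradient descent on the mean squared Hamiltonian over sampled states; all names and the specific system are assumptions for this example.

```python
import numpy as np

# Toy setup (assumed for illustration): dx/dt = a*x + b*u,
# running cost r(x, u) = x^2 + u^2, value guess V(x) = p*x^2.
a, b = -1.0, 1.0

def hamiltonian(p, x):
    # Greedy policy from dV/dx = 2*p*x: u = -b*p*x (minimizes r + V'*(ax+bu)).
    u = -b * p * x
    return x**2 + u**2 + 2 * p * x * (a * x + b * u)

# Warm-up phase sketch: descend the mean squared Hamiltonian over sample states.
xs = np.linspace(-1.0, 1.0, 21)
loss = lambda q: np.mean(hamiltonian(q, xs) ** 2)

p, lr, eps = 0.0, 1e-2, 1e-6
for _ in range(5000):
    grad = (loss(p + eps) - loss(p - eps)) / (2 * eps)  # finite-difference gradient
    p -= lr * grad

# For this linear-quadratic case the HJB equation has the closed-form
# solution p* = (a + sqrt(a^2 + b^2)) / b^2, so the sketch can be checked.
p_star = (a + np.sqrt(a**2 + b**2)) / b**2
print(p, p_star)
```

For a general nonlinear system no closed form exists; the point of the sketch is only that driving the squared Hamiltonian to zero recovers a value function satisfying the HJB condition, which is the admissibility mechanism the warm-up phase relies on.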