To address the coupling between control loops and the adaptive parameter tuning problem in multi-input multi-output (MIMO) PID control systems, this paper proposes a self-adaptive LSAC-PID algorithm based on deep reinforcement learning (RL) and Lyapunov-based reward shaping. For complex and unknown mobile robot control environments, an RL-based MIMO PID hybrid control strategy is first presented. Based on the dynamic information and environmental feedback of the mobile robot, the RL agent outputs the optimal MIMO PID parameters in real time, without requiring a mathematical model or decoupling of the multiple control loops. Then, to improve the convergence speed of RL and the stability of the mobile robot, a Lyapunov-based reward shaping soft actor-critic (LSAC) algorithm is proposed, built on Lyapunov theory and the potential-based reward shaping method. The convergence and optimality of the algorithm are proved via the policy evaluation and policy improvement steps of soft policy iteration. In addition, for line-following robots, the region growing method is improved to cope with forks in the line and environmental interference. Comparison tests and cross-validation in both simulation and real environments show the good performance of the proposed LSAC-PID tuning algorithm.
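As a concrete illustration of the two core ideas above, the following is a minimal sketch, not the paper's implementation. With potential-based reward shaping, the shaped reward is $r' = r + \gamma\,\Phi(s') - \Phi(s)$, which is known to preserve the optimal policy; choosing the potential as the negative of a Lyapunov candidate, e.g. $\Phi(s) = -e^{\top} P e$ for the tracking error $e$, rewards transitions that decrease the Lyapunov function. The quadratic candidate, the gain bounds, and all function names below are illustrative assumptions.

```python
import numpy as np

def lyapunov_potential(error, P):
    """Potential Phi(s) = -e^T P e: larger when the tracking error is smaller.
    The quadratic Lyapunov candidate V(e) = e^T P e is an assumed example."""
    e = np.asarray(error, dtype=float)
    return -float(e @ P @ e)

def shaped_reward(r, error, next_error, P, gamma=0.99):
    """Potential-based reward shaping: r' = r + gamma * Phi(s') - Phi(s).
    This form is policy-invariant, so shaping can speed up convergence
    without changing the optimum the agent learns."""
    return r + gamma * lyapunov_potential(next_error, P) \
             - lyapunov_potential(error, P)

def action_to_pid_gains(action, k_min, k_max):
    """Map a SAC action in [-1, 1]^n to MIMO PID gains within assumed bounds,
    e.g. n = 6 for (Kp, Ki, Kd) of two coupled loops of a mobile robot."""
    a = np.clip(np.asarray(action, dtype=float), -1.0, 1.0)
    return k_min + 0.5 * (a + 1.0) * (k_max - k_min)
```

In such a setup, at each control step the agent would observe the robot's error state, output an action, and the gains from `action_to_pid_gains` would parameterize the MIMO PID controller directly, so no explicit decoupling of the loops is needed; the critic would be trained on `shaped_reward` rather than the raw environment reward.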