We propose a simple, practical, and intuitive approach to improve the performance of a conventional controller in uncertain environments using deep reinforcement learning, while maintaining safe operation. Our approach is motivated by the observation that conventional controllers in industrial motion control value robustness over adaptivity in order to cope with different operating conditions, and are suboptimal as a consequence. Reinforcement learning, on the other hand, can optimize a control signal directly from input-output data and thus adapt to operating conditions, but lacks safety guarantees, impeding its use in industrial environments. To realize adaptive control with reinforcement learning under such conditions, we follow a residual learning methodology, in which a reinforcement learning algorithm learns corrective adaptations to a base controller's output to improve optimality. We investigate how constraining the residual agent's actions allows us to leverage the base controller's robustness to guarantee safe operation. We detail the algorithmic design and propose constraining the residual actions relative to the base controller's output to increase the method's robustness. Building on Lyapunov stability theory, we prove stability for a broad class of mechatronic closed-loop systems. We validate our method experimentally on a slider-crank setup and investigate how the constraints affect safety during learning and optimality after convergence.
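The core mechanism can be sketched in a few lines: the residual agent's action is bounded relative to the base controller's output before being added to it, so the agent can only perturb, never override, the base control signal. This is a minimal illustrative sketch; the function name and the relative bound `alpha` are assumptions for exposition, not the paper's actual interface.

```python
import numpy as np

def residual_control(u_base, a_residual, alpha=0.2):
    """Combine a base controller's output with a learned residual action.

    The residual is clipped to a fraction `alpha` of the base output's
    magnitude (an illustrative choice of relative constraint), so the
    base controller's robustness properties remain dominant.
    """
    bound = alpha * np.abs(u_base)                 # constraint relative to base output
    residual = np.clip(a_residual, -bound, bound)  # saturate the agent's correction
    return u_base + residual

# Example: with base output 2.0 and alpha=0.2, a raw residual of 1.5
# is clipped to 0.4, giving a total control signal of 2.4.
u_total = residual_control(2.0, 1.5, alpha=0.2)
```

A smaller `alpha` keeps operation closer to the proven-safe base controller during early learning; a larger `alpha` gives the agent more room to improve optimality after convergence.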