In recent years, reinforcement learning (RL) has gained increasing attention in control engineering, with policy gradient methods in particular being widely used. In this work, we improve the tracking performance of proximal policy optimization (PPO) for arbitrary reference signals by incorporating information about future reference values. We present two variants of extending the arguments of the actor and the critic to take future reference values into account. In the first variant, the global future reference values are added to the argument directly. In the second variant, a novel kind of residual space based on future reference values, applicable to model-free reinforcement learning, is introduced. Our approach is evaluated against a PI controller on a simple drive train model. We expect our method to generalize better to arbitrary references than previous approaches, pointing towards the applicability of RL to the control of real systems.
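As an illustration of the first variant, the sketch below shows one way the observation passed to actor and critic could be augmented with upcoming reference samples. This is a minimal example under assumed conventions: the function name, the preview length n_future, and the boundary handling are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def augment_observation(state, reference, k, n_future):
    """Illustrative sketch of variant 1: append the next n_future reference
    values r[k+1], ..., r[k+n_future] to the current state so that actor and
    critic can anticipate the upcoming reference trajectory.

    `state`, `reference`, `k`, and `n_future` are hypothetical names used only
    for this example; the paper's exact formulation may differ.
    """
    # Clip the preview window at the end of the reference signal by
    # repeating the last value (one possible boundary handling).
    idx = np.clip(np.arange(k + 1, k + 1 + n_future), 0, len(reference) - 1)
    return np.concatenate([state, reference[idx]])

# Usage example: a 2-dimensional drive-train state and a sinusoidal reference.
reference = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
state = np.array([0.1, -0.05])           # e.g. current speed and torsion
obs = augment_observation(state, reference, k=10, n_future=5)
print(obs.shape)                          # (2 + 5,) = (7,)
```

The augmented observation can then be fed to a standard PPO actor-critic without any change to the learning algorithm itself.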