Reinforcement learning (RL) is attracting attention as an effective way to solve sequential optimization problems that involve high-dimensional state/action spaces and stochastic uncertainties. Many such problems involve requirements expressed as inequality constraints. This study focuses on using RL to solve constrained optimal control problems. Most RL application studies have handled inequality constraints by adding soft penalty terms for constraint violations to the reward function. However, while training neural networks to learn the value (or Q) function, one can run into computational issues caused by the sharp change in the function value at the constraint boundary induced by the large penalty imposed there. This difficulty during training can lead to convergence problems and ultimately to poor closed-loop performance. To address this issue, this study proposes a dynamic penalty (DP) approach in which the penalty factor is gradually and systematically increased as the training episodes proceed. We first examine the ability of a neural network to represent a value function when a uniform, linear, or DP penalty function is added to discourage constraint violation. An agent trained by a Deep Q-Network (DQN) algorithm with the DP approach is then compared with agents trained with constant penalty functions on a simple vehicle control problem. Results show that the proposed approach can improve the neural network approximation accuracy and provide faster convergence when close to a solution.
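The abstract only describes the dynamic penalty idea at a high level. As a minimal sketch of how a gradually increasing penalty weight could be applied to the reward during training (the function names, the geometric schedule, the cap `rho_max`, and the convention that constraints are written as g_i <= 0 are all assumptions for illustration, not details taken from the paper):

```python
import numpy as np

def dynamic_penalty_factor(episode, rho_init=1.0, rho_max=100.0, growth=1.05):
    """Assumed schedule: the penalty weight grows geometrically with the
    training episode and is capped at rho_max, so early episodes see a mild,
    smooth penalty landscape and later episodes approach the hard constraint."""
    return min(rho_init * growth ** episode, rho_max)

def shaped_reward(base_reward, constraint_values, episode):
    """Subtract a penalty proportional to the total constraint violation.

    constraint_values: array of g_i(s, a); constraints are taken as g_i <= 0,
    so only positive entries count as violations (an assumed convention).
    """
    rho = dynamic_penalty_factor(episode)
    violation = np.sum(np.maximum(constraint_values, 0.0))
    return base_reward - rho * violation

# Example: the same violation is penalized more heavily late in training.
g = np.array([0.2, -0.5])                  # one active violation of size 0.2
print(shaped_reward(1.0, g, episode=0))    # mild penalty early on
print(shaped_reward(1.0, g, episode=200))  # penalty saturated at rho_max
```

With a constant, large penalty weight the Q-function inherits a near-discontinuity at the constraint boundary from the first episode onward; the schedule above instead lets the network first fit a smooth surrogate and then sharpen it as the weight grows, which is the intuition behind the DP approach described in the abstract.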