This research investigates some of the issues that arise from scalarizing the multi-objective optimization problem in the Advantage Actor-Critic (A2C) reinforcement learning algorithm. The paper shows how a naive scalarization can lead to overlapping gradients between the objectives. Furthermore, the possibility that the entropy regularization term can act as a source of uncontrolled noise is discussed. To address these issues, a technique to avoid gradient overlap while keeping the same loss formulation is proposed, and a method to avoid the uncontrolled noise, by sampling actions from distributions with a desired minimum entropy, is investigated. Pilot experiments show how the proposed methods speed up training. The proposed approach can be applied to any advantage-based reinforcement learning algorithm.
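To make the overlap concrete, consider the usual scalarized A2C loss, which sums the policy term, the value term, and the entropy bonus. If the advantage in the policy term is not detached, the policy gradient also backpropagates into the value estimate, overlapping with the critic's own loss gradient. The following is a minimal PyTorch sketch of this setting; detaching the advantage is one standard remedy and is not necessarily the exact technique proposed in the paper, and the coefficients c_v and c_e are illustrative.

```python
import torch
import torch.nn.functional as F

def a2c_loss(log_prob, value, returns, entropy, c_v=0.5, c_e=0.01):
    """Naive scalarization of the three A2C objectives into one loss.

    log_prob: log pi(a|s) of the taken actions
    value:    V(s) predicted by the critic
    returns:  bootstrapped returns (targets for the critic)
    entropy:  per-state entropy of the policy distribution
    """
    advantage = returns - value
    # Detach the advantage so the value head is trained only by its
    # own (critic) loss term, avoiding overlapping gradients.
    policy_loss = -(log_prob * advantage.detach()).mean()
    value_loss = F.mse_loss(value, returns)
    entropy_bonus = entropy.mean()
    return policy_loss + c_v * value_loss - c_e * entropy_bonus
```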
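As for sampling actions from distributions with a desired minimum entropy, one simple way to realize the idea is to mix the policy's categorical distribution with the uniform distribution just enough to reach the target entropy; since mixing toward uniform never decreases entropy, the smallest sufficient mixing weight can be found by binary search. The sketch below is a hypothetical illustration under that assumption (the helper min_entropy_probs is an assumption, not the paper's method), and it requires h_min <= log(n).

```python
import numpy as np

def min_entropy_probs(p, h_min, tol=1e-6):
    """Return a distribution with entropy >= h_min (in nats) by mixing
    the categorical distribution p with the uniform distribution."""
    n = len(p)
    uniform = np.full(n, 1.0 / n)

    def entropy(q):
        q = np.clip(q, 1e-12, 1.0)
        return -np.sum(q * np.log(q))

    if entropy(p) >= h_min:
        return p  # already entropic enough, sample from p as-is
    # Binary search for the smallest mixing weight eps in [0, 1];
    # entropy of (1 - eps) * p + eps * uniform is nondecreasing in eps.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        eps = 0.5 * (lo + hi)
        if entropy((1 - eps) * p + eps * uniform) >= h_min:
            hi = eps
        else:
            lo = eps
    return (1 - hi) * p + hi * uniform
```

Actions would then be sampled from the returned distribution instead of the raw policy output, guaranteeing the desired entropy floor at action-selection time.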