Learning a predictive model of the mean return, or value function, plays a critical role in many reinforcement learning algorithms. Distributional reinforcement learning (DRL) methods instead model the value distribution, which has been shown to improve performance in many settings. In this paper, we model the value distribution as approximately normal using the Markov chain central limit theorem. We analytically compute quantile bars to provide a new DRL target that is informed by the decrease in standard deviation that occurs over the course of an episode. In addition, we propose a policy update strategy based on uncertainty, as measured by structural characteristics of the value distribution that are not present in the standard value function. The approach we outline is compatible with many DRL structures. We use two representative on-policy algorithms, PPO and TRPO, as testbeds and show that our methods produce performance improvements in continuous control tasks.
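To make the idea concrete, the sketch below illustrates one way such quantile targets could be computed under a normality assumption, where the standard deviation of the return-to-go shrinks with the number of remaining steps. The function name `gaussian_quantile_targets`, the `sigma_per_step` parameter, the square-root scaling in the remaining steps, and the midpoint quantile fractions are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import norm

def gaussian_quantile_targets(mean_return, sigma_per_step, steps_remaining,
                              num_quantiles=32):
    """Quantile targets of an approximately normal value distribution.

    Under a CLT-style view, the return-to-go is treated as roughly
    N(mean_return, sigma_per_step**2 * steps_remaining), so its standard
    deviation shrinks as the episode progresses (fewer remaining steps).
    Quantiles are evaluated at midpoints tau_i = (2i + 1) / (2N), a common
    choice in quantile-based DRL parameterizations.
    """
    std = sigma_per_step * np.sqrt(max(steps_remaining, 1))
    taus = (np.arange(num_quantiles) + 0.5) / num_quantiles
    return mean_return + std * norm.ppf(taus)

# Example: targets early vs. late in an episode; the spread contracts
# as fewer steps remain, reflecting the shrinking standard deviation.
early = gaussian_quantile_targets(mean_return=10.0, sigma_per_step=1.0,
                                  steps_remaining=100)
late = gaussian_quantile_targets(mean_return=10.0, sigma_per_step=1.0,
                                 steps_remaining=4)
print(early.std() > late.std())  # True
```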