Learning a predictive model of the mean return, or value function, plays a critical role in many reinforcement learning algorithms. Distributional reinforcement learning (DRL) methods instead model the value distribution, which has been shown to improve performance in many settings. In this paper, we model the value distribution as approximately normal using the Markov chain central limit theorem. We analytically compute quantile bars to provide a new DRL target that is informed by the decrease in standard deviation that occurs over the course of an episode. In addition, we propose an exploration strategy based on how closely the learned value distribution resembles the target normal distribution, so that the value function becomes more accurate and yields better policy improvement. The approach we outline is compatible with many DRL architectures. We use proximal policy optimization as a testbed and show that both the normality-guided target and the exploration bonus produce performance improvements. We demonstrate that our method outperforms DRL baselines on a number of continuous control tasks.
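As a rough illustration of the underlying idea, and not the paper's exact procedure, the quantile targets of an approximately normal value distribution can be obtained analytically from an estimated mean and standard deviation via the normal inverse CDF. The function name, the midpoint quantile fractions, and the example parameters below are assumptions for the sketch; scipy is assumed available.

```python
import numpy as np
from scipy.stats import norm


def normal_quantile_targets(mu, sigma, n_quantiles):
    """Quantile values of N(mu, sigma^2) at midpoint fractions tau_i = (2i + 1) / (2N)."""
    taus = (np.arange(n_quantiles) + 0.5) / n_quantiles
    return mu + sigma * norm.ppf(taus)


# Example: five quantile targets for a value estimate with mean 10 and std 2.
print(normal_quantile_targets(10.0, 2.0, 5))
```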