In model-free deep reinforcement learning (RL) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to sample efficiency. Because this noise is heteroscedastic, its effects can be mitigated with uncertainty-based weights in the optimization process. Previous methods rely on sampled ensembles, which do not capture all aspects of uncertainty. We provide a systematic analysis of the sources of uncertainty in the noisy supervision that occurs in RL, and introduce inverse-variance RL, a Bayesian framework that combines probabilistic ensembles and Batch Inverse Variance weighting. We propose a method whereby two complementary uncertainty estimation techniques account for both the Q-value uncertainty and the environment stochasticity, better mitigating the negative impacts of noisy supervision. Our results show significant improvements in sample efficiency on discrete and continuous control tasks.
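The core idea of inverse-variance weighting can be illustrated with a minimal sketch: each sample's squared TD error is down-weighted in proportion to the estimated variance of its target, so that noisier supervision contributes less to the loss. This is an illustrative example, not the paper's implementation; the function name and the use of NumPy arrays are assumptions, and in practice the per-sample variances would come from a probabilistic ensemble of Q-networks.

```python
import numpy as np

def inverse_variance_loss(td_errors, variances, eps=1e-8):
    """Inverse-variance weighted squared-error loss (illustrative sketch).

    td_errors: per-sample temporal-difference errors, shape (batch,)
    variances: estimated variance of each sample's target, shape (batch,)
    eps: small constant to avoid division by zero

    Weights are proportional to 1/variance and normalized over the batch,
    so high-variance (noisy) targets contribute less to the loss.
    """
    weights = 1.0 / (np.asarray(variances) + eps)
    weights = weights / weights.sum()
    return float(np.sum(weights * np.asarray(td_errors) ** 2))

# With equal variances, the loss reduces to the plain mean squared error.
loss_uniform = inverse_variance_loss([1.0, 2.0], [1.0, 1.0])

# With unequal variances, the low-variance sample dominates the loss.
loss_weighted = inverse_variance_loss([1.0, 2.0], [0.1, 10.0])
```

Under equal variances the weights are uniform, so `loss_uniform` equals the mean of the squared errors (2.5 here); with unequal variances, the weighting shifts the loss toward the more reliable sample.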