Prioritized experience replay is a reinforcement learning technique whereby agents speed up learning by replaying useful past experiences. This usefulness is quantified as the expected gain from replaying the experience, a quantity often approximated by the prediction error (TD-error). However, recent work in neuroscience suggests that, in biological organisms, replay is prioritized not only by gain, but also by "need" -- a quantity measuring the expected relevance of each experience with respect to the current situation. Importantly, this need term is not currently considered in algorithms such as prioritized experience replay. In this paper we present a new approach for prioritizing experiences for replay that considers both gain and need. Our proposed algorithms show a significant increase in performance on benchmarks including the Dyna-Q maze and a selection of Atari games.
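The prioritization described above can be sketched minimally as follows. This is not the paper's algorithm, only an illustration of the idea under assumptions: gain is approximated by the TD-error magnitude (as in standard prioritized replay), and per-transition `need` weights are assumed to be supplied externally (e.g., from a successor-representation-style estimate of each state's relevance to the current one).

```python
import random

def replay_probabilities(td_errors, need, eps=1e-6):
    """Combine gain and need into replay sampling probabilities.

    td_errors -- per-transition TD-errors; |TD-error| serves as the gain proxy
    need      -- per-transition relevance weights (assumed given here)
    eps       -- small constant so zero-error transitions keep nonzero priority
    """
    # Priority of each transition = gain proxy * need weight.
    priorities = [(abs(d) + eps) * n for d, n in zip(td_errors, need)]
    total = sum(priorities)
    return [p / total for p in priorities]

# Usage: sample a transition index in proportion to its combined priority.
probs = replay_probabilities(td_errors=[0.5, 0.1, 2.0],
                             need=[1.0, 0.2, 0.5])
idx = random.choices(range(len(probs)), weights=probs)[0]
```

Note that a transition with a large TD-error but low need (the third entry still dominates here only because its need is moderate) can be outranked by a smaller-error, high-need transition, which is the behavioral difference from gain-only prioritization.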