This paper provides an empirical evaluation of recently developed exploration algorithms within the Arcade Learning Environment (ALE). We study the use of different reward bonuses that incentivize exploration in reinforcement learning. We do so by fixing the learning algorithm and focusing only on the impact of the different exploration bonuses on the agent's performance. We use Rainbow, the state-of-the-art algorithm for value-based agents, and focus on some of the bonuses proposed in the last few years. We consider the impact these algorithms have on performance within the popular game Montezuma's Revenge, which has attracted considerable interest from the exploration community, across the set of seven games identified by Bellemare et al. (2016) as challenging for exploration, and on easier games where exploration is not an issue. We find that, in our setting, recently developed bonuses do not provide significantly improved performance on Montezuma's Revenge or on hard exploration games. We also find that existing bonus-based methods may negatively impact performance on games in which exploration is not an issue and may even perform worse than $\epsilon$-greedy exploration.