The past few years have seen rapid progress in combining reinforcement learning (RL) with deep learning. Breakthroughs ranging from games to robotics have spurred interest in designing sophisticated RL algorithms and systems. However, the prevailing workflow in RL is to learn tabula rasa, which can be computationally inefficient. This precludes the continuous deployment of RL algorithms and potentially excludes researchers without large-scale computing resources. In many other areas of machine learning, the pretraining paradigm has been shown to be effective for acquiring transferable knowledge, which can then be utilized for a variety of downstream tasks. Recently, there has been a surge of interest in pretraining for deep RL, with promising results. However, much of this research has been conducted under differing experimental settings. Due to the nature of RL, pretraining in this field faces unique challenges and hence requires new design principles. In this survey, we seek to systematically review existing work on pretraining for deep reinforcement learning, provide a taxonomy of these methods, discuss each sub-field, and bring attention to open problems and future directions.