Learning tabula rasa, that is, without any prior knowledge, is the prevalent workflow in reinforcement learning (RL) research. However, RL systems, when applied to large-scale settings, rarely operate tabula rasa. Such large-scale systems undergo multiple design or algorithmic changes during their development cycle and use ad hoc approaches for incorporating these changes without re-training from scratch, which would be prohibitively expensive. Additionally, the inefficiency of deep RL typically excludes researchers without access to industrial-scale resources from tackling computationally demanding problems. To address these issues, we present reincarnating RL as an alternative workflow, where prior computational work (e.g., learned policies) is reused or transferred between design iterations of an RL agent, or from one RL agent to another. As a step towards enabling reincarnating RL from any agent to any other agent, we focus on the specific setting of efficiently transferring an existing sub-optimal policy to a standalone value-based RL agent. We find that existing approaches fail in this setting and propose a simple algorithm to address their limitations. Equipped with this algorithm, we demonstrate reincarnating RL's gains over tabula rasa RL on Atari 2600 games, a challenging locomotion task, and the real-world problem of navigating stratospheric balloons. Overall, this work argues for an alternative approach to RL research, which we believe could significantly improve real-world RL adoption and help democratize it further.
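To make the policy-to-value transfer setting concrete, the sketch below shows one hedged way a standalone value-based student could be warm-started from a sub-optimal teacher policy: a standard one-step TD loss on transitions, combined with a distillation term that pulls the softmax over the student's Q-values toward the teacher's action probabilities. This is an illustrative assumption, not the paper's algorithm; all names (QNetwork, reincarnation_loss, teacher_probs) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: warm-start a value-based learner from a sub-optimal
# teacher policy by combining a temporal-difference (TD) loss with a
# distillation loss toward the teacher's action distribution.

class QNetwork(nn.Module):
    def __init__(self, obs_dim: int, num_actions: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # Q-values, shape [batch, num_actions]


def reincarnation_loss(q_net, target_net, batch, teacher_probs,
                       gamma=0.99, distill_weight=1.0, temperature=1.0):
    """TD loss on transitions (e.g., collected from the teacher), plus a
    cross-entropy term matching softmax(Q) to the teacher's policy."""
    obs, actions, rewards, next_obs, dones = batch

    # Standard one-step Q-learning target.
    q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * next_q
    td_loss = F.smooth_l1_loss(q_values, td_target)

    # Distillation: align the student's induced policy with the teacher's.
    student_log_probs = F.log_softmax(q_net(obs) / temperature, dim=1)
    distill_loss = -(teacher_probs * student_log_probs).sum(dim=1).mean()

    return td_loss + distill_weight * distill_loss
```

In such a scheme, the distillation weight would typically be decayed over training so that the student can eventually surpass the sub-optimal teacher rather than merely imitate it.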