Pre-training with offline data and online fine-tuning using reinforcement learning is a promising strategy for learning control policies, leveraging the best of both worlds in terms of sample efficiency and performance. One natural approach is to initialize the policy for online learning with the one trained offline. In this work, we introduce a policy expansion scheme for this task. After learning the offline policy, we use it as one candidate policy in a policy set. We then expand the policy set with another policy that will be responsible for further learning. The two policies are composed in an adaptive manner for interacting with the environment. With this approach, the policy previously learned offline is fully retained during online learning, mitigating potential issues such as destroying its useful behaviors in the initial stage of online learning, while still allowing the offline policy to participate in exploration naturally and adaptively. Moreover, new useful behaviors can potentially be captured by the newly added policy through learning. Experiments are conducted on a number of tasks, and the results demonstrate the effectiveness of the proposed approach.
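The abstract does not spell out how the two policies are composed, so the sketch below illustrates one plausible adaptive composition under stated assumptions: each candidate policy proposes an action for the current state, the proposals are scored by a learned critic Q(s, a), and one proposal is sampled from a softmax over those scores. The names `pi_offline`, `pi_new`, `q_fn`, and `temperature` are hypothetical placeholders for illustration, not the paper's API.

```python
import torch

class ExpandedPolicySet:
    """Minimal sketch of adaptively composing a frozen offline policy with a new online policy."""

    def __init__(self, pi_offline, pi_new, q_fn, temperature=1.0):
        # The offline policy is kept intact; only pi_new is updated during online learning.
        self.policies = [pi_offline, pi_new]
        self.q_fn = q_fn                # assumed critic: q_fn(state, action) -> scalar tensor
        self.temperature = temperature  # controls how sharply proposals compete

    @torch.no_grad()
    def act(self, state):
        # Each candidate policy proposes an action for the current state.
        proposals = [pi(state) for pi in self.policies]                    # list of (act_dim,) tensors
        # Score the proposals with the critic and sample one proportionally to
        # a softmax over the scores, so the offline policy still takes part in
        # exploration while the new policy can gradually take over.
        scores = torch.stack([self.q_fn(state, a) for a in proposals])     # shape (K,)
        weights = torch.softmax(scores / self.temperature, dim=0)
        idx = torch.multinomial(weights, num_samples=1).item()
        return proposals[idx]
```

Sampling rather than taking the arg-max keeps the composition adaptive: while the critic still favors the offline policy's proposals its actions are chosen more often, and as the newly added policy improves it is selected increasingly frequently.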