Offline RL methods have been shown to reduce the need for environment interaction by training agents on offline collected episodes. However, these methods typically require action information to be logged during data collection, which can be difficult or even impossible in some practical cases. In this paper, we investigate the potential of using action-free offline datasets to improve online reinforcement learning, naming this problem Reinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We introduce Action-Free Guide (AF-Guide), a method that guides online training by extracting knowledge from action-free offline datasets. AF-Guide consists of an Action-Free Decision Transformer (AFDT), implementing a variant of Upside-Down Reinforcement Learning, which learns to plan the next states from the offline dataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online with guidance from AFDT. Experimental results show that AF-Guide improves sample efficiency and performance in online training thanks to the knowledge extracted from the action-free offline dataset. Code is available at https://github.com/Vision-CAIR/AF-Guide.
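As a rough illustration of the guidance idea (not the paper's exact formulation), the planned next state from AFDT can be turned into an intrinsic signal that rewards the online agent for reaching it. The sketch below assumes a simple negative-distance form; the function name and any scaling are hypothetical, and the actual objective is defined in the paper and repository.

```python
import numpy as np

def guidance_reward(next_state: np.ndarray, planned_state: np.ndarray) -> float:
    """Intrinsic guidance reward (assumed form): the closer the achieved next
    state is to the state planned by the action-free model (AFDT), the higher
    the reward."""
    return -float(np.linalg.norm(next_state - planned_state))

# During online training, a SAC-style agent would receive this guidance signal
# (in addition to the environment reward) so that behavior reaching the
# AFDT-planned states is reinforced.
```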