Offline RL methods have been shown to reduce the need for environment interaction by training agents on offline-collected episodes. However, these methods typically require action information to be logged during data collection, which can be difficult or even impossible in some practical cases. In this paper, we investigate the potential of using action-free offline datasets to improve online reinforcement learning, and we name this problem Reinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We introduce Action-Free Guide (AF-Guide), a method that guides online training by extracting knowledge from action-free offline datasets. AF-Guide consists of an Action-Free Decision Transformer (AFDT), which implements a variant of Upside-Down Reinforcement Learning and learns to plan the next states from the offline dataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online under the guidance of AFDT. Experimental results show that AF-Guide improves sample efficiency and performance in online training thanks to the knowledge extracted from the action-free offline dataset.
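To make the two components concrete, below is a minimal PyTorch sketch of the idea described above: a Decision-Transformer-style planner over interleaved (return-to-go, state) tokens that predicts the next state without action tokens, plus a shaped reward that nudges an online agent toward the planner's suggested states. All names (ActionFreePlanner, guided_reward, beta) and hyperparameters are illustrative assumptions, not the paper's implementation; the abstract does not specify how Guided SAC consumes AFDT's plans.

```python
import torch
import torch.nn as nn

class ActionFreePlanner(nn.Module):
    """Sketch of an AFDT-style planner: a causal transformer over
    interleaved (return-to-go, state) tokens that predicts the next state.
    Hypothetical names and hyperparameters, not the paper's implementation."""

    def __init__(self, state_dim, embed_dim=128, n_layers=3, n_heads=4, max_len=20):
        super().__init__()
        self.state_embed = nn.Linear(state_dim, embed_dim)
        self.rtg_embed = nn.Linear(1, embed_dim)
        self.pos_embed = nn.Embedding(2 * max_len, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.next_state_head = nn.Linear(embed_dim, state_dim)

    def forward(self, states, returns_to_go):
        # states: (B, T, state_dim); returns_to_go: (B, T, 1)
        B, T, _ = states.shape
        g = self.rtg_embed(returns_to_go)  # (B, T, D)
        s = self.state_embed(states)       # (B, T, D)
        # Interleave tokens as (g_1, s_1, g_2, s_2, ...), Decision-Transformer
        # style, but with no action tokens since the offline data is action-free.
        tokens = torch.stack([g, s], dim=2).reshape(B, 2 * T, -1)
        tokens = tokens + self.pos_embed(torch.arange(2 * T, device=states.device))
        mask = nn.Transformer.generate_square_subsequent_mask(2 * T).to(states.device)
        h = self.encoder(tokens, mask=mask)
        # Read the prediction for s_{t+1} off each state token's output.
        return self.next_state_head(h[:, 1::2, :])  # (B, T, state_dim)


def guided_reward(env_reward, next_state, planned_next_state, beta=1.0):
    """One simple way the planner could guide an online learner: shape the
    reward toward the planned next state (an assumed mechanism, not
    necessarily the form of guidance used by Guided SAC)."""
    intrinsic = -torch.norm(next_state - planned_next_state, dim=-1)
    return env_reward + beta * intrinsic
```

Whether the guidance enters as a shaped reward, as above, or through a separate guide critic inside SAC is a design choice the abstract leaves open; the sketch uses reward shaping only because it is the simplest form to state.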