Offline reinforcement learning, which learns from a fixed dataset, makes it possible to train agents without interacting with the environment. However, depending on the quality of the offline dataset, such pre-trained agents may have limited performance and may need to be further fine-tuned online by interacting with the environment. During online fine-tuning, the performance of the pre-trained agent may collapse quickly due to the sudden distribution shift from offline to online data. While constraints enforced by offline RL methods, such as a behavior cloning loss, mitigate this to an extent, these constraints also significantly slow down online fine-tuning by forcing the agent to stay close to the behavior policy. We propose to adaptively weight the behavior cloning loss during online fine-tuning based on the agent's performance and training stability. Moreover, we use a randomized ensemble of Q functions to further increase the sample efficiency of online fine-tuning by performing a large number of learning updates. Experiments show that the proposed method yields state-of-the-art offline-to-online reinforcement learning performance on the popular D4RL benchmark. Code is available: \url{https://github.com/zhaoyi11/adaptive_bc}.
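The two ingredients above can be illustrated with a minimal NumPy sketch. Both functions are hypothetical simplifications for illustration: `adaptive_bc_weight` and its `target_return` parameter are an assumed performance-based schedule (the paper's actual rule also accounts for training stability), and `randomized_min_q` sketches the familiar REDQ-style randomized-ensemble target rather than the authors' exact implementation:

```python
import numpy as np

def adaptive_bc_weight(recent_returns, target_return, w_min=0.0, w_max=1.0):
    # Hypothetical schedule: shrink the behavior cloning weight toward
    # w_min as the agent's average return approaches a target score, so
    # the constraint relaxes once online fine-tuning is going well.
    progress = np.clip(np.mean(recent_returns) / target_return, 0.0, 1.0)
    return w_max - (w_max - w_min) * progress

def randomized_min_q(q_estimates, subset_size=2, rng=None):
    # REDQ-style target: take the minimum over a random subset of the
    # Q-function ensemble, which curbs overestimation while permitting
    # many gradient updates per environment step.
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(q_estimates), size=subset_size, replace=False)
    return float(np.min(np.asarray(q_estimates)[idx]))

# Usage: the BC weight decays as online performance improves.
w_early = adaptive_bc_weight([10.0, 20.0], target_return=100.0)  # high weight
w_late = adaptive_bc_weight([90.0, 95.0], target_return=100.0)   # low weight
target_q = randomized_min_q([1.3, 0.9, 1.1, 1.0], subset_size=2)
```

In a TD3+BC-style actor update, the returned weight would scale the behavior cloning term relative to the Q-maximization term.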