Offline reinforcement learning (RL) defines the task of learning from a fixed batch of data. Due to errors in value estimation arising from out-of-distribution actions, most offline RL algorithms constrain or regularize the policy toward the actions contained in the dataset. Because these methods are built on pre-existing RL algorithms, the modifications needed to make an RL algorithm work offline come at the cost of additional complexity: offline RL algorithms introduce new hyperparameters and often leverage secondary components such as generative models, while adjusting the underlying RL algorithm. In this paper we aim to make a deep RL algorithm work offline while making minimal changes. We find that we can match the performance of state-of-the-art offline RL algorithms by simply adding a behavior cloning term to the policy update of an online RL algorithm and normalizing the data. The resulting algorithm is a simple-to-implement and easy-to-tune baseline that more than halves the overall run time by removing the computational overhead of previous methods.
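To make the described change concrete, the following is a minimal PyTorch sketch of a policy update augmented with a behavior cloning term, together with dataset-level state normalization. The network interfaces (`actor`, `critic`), the weighting coefficient `alpha`, and the adaptive scaling of the Q term are illustrative assumptions under this sketch, not details stated in the abstract.

```python
import torch
import torch.nn.functional as F

def bc_regularized_actor_loss(actor, critic, states, actions, alpha=2.5):
    """Policy update with an added behavior cloning term.

    `actor` and `critic` are assumed to be torch.nn.Module networks with
    actor(states) -> actions and critic(states, actions) -> Q-values.
    `states`, `actions` are a minibatch sampled from the fixed offline dataset.
    """
    pi = actor(states)                    # actions proposed by the current policy
    q = critic(states, pi)                # critic's estimate of their value
    # Scale the Q term by the inverse of its average magnitude so the RL
    # objective and the behavior cloning penalty stay on a comparable scale.
    lmbda = alpha / q.abs().mean().detach()
    # Maximize Q while staying close to the dataset actions (behavior cloning).
    return -lmbda * q.mean() + F.mse_loss(pi, actions)


def normalize_states(states, eps=1e-3):
    """Normalize each state feature using statistics of the offline dataset."""
    mean = states.mean(dim=0, keepdim=True)
    std = states.std(dim=0, keepdim=True) + eps
    return (states - mean) / std, mean, std
```

Tying the weight of the Q term to its average magnitude is one way such a combined objective could avoid introducing a per-task hyperparameter search, which is in the spirit of the minimal-changes argument above.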