The ability to discover optimal behaviour from fixed data sets has the potential to transfer the successes of reinforcement learning (RL) to domains where data collection is acutely problematic. In this offline setting, a key challenge is overcoming overestimation bias for actions not present in the data; without the ability to correct this bias via interaction with the environment, it can propagate and compound during training, leading to highly sub-optimal policies. One simple method to reduce this bias is to introduce a policy constraint via behavioural cloning (BC), which encourages agents to pick actions closer to the source data. By finding the right balance between RL and BC, such approaches have been shown to be surprisingly effective while requiring minimal changes to the underlying algorithms they are based on. To date this balance has been held constant, but in this work we explore the idea of tipping this balance towards RL following initial training. Using TD3-BC, we demonstrate that by continuing to train a policy offline while reducing the influence of the BC component we can produce refined policies that outperform the original baseline, as well as match or exceed the performance of more complex alternatives. Furthermore, we demonstrate such an approach can be used for stable online fine-tuning, allowing policies to be safely improved during deployment.
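To make the idea concrete, the sketch below shows a PyTorch-style TD3-BC actor loss in which the BC term carries an explicit weight that can be decayed once the initial offline training phase is complete. The function and parameter names (bc_weight, refine_start, refine_steps) and the linear schedule are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
import torch
import torch.nn.functional as F


def td3_bc_actor_loss(actor, critic, states, actions, bc_weight):
    """TD3-BC-style actor loss: maximise Q while staying close to dataset actions.

    Shrinking `bc_weight` during continued offline training tips the
    RL/BC balance towards RL, as described in the abstract.
    """
    pi = actor(states)
    q = critic(states, pi)
    # Normalisation used in TD3-BC: scale the RL term by 1 / mean|Q|
    lam = 1.0 / q.abs().mean().detach()
    rl_term = -lam * q.mean()                 # maximise Q
    bc_term = F.mse_loss(pi, actions)         # stay close to the data
    return rl_term + bc_weight * bc_term


def bc_weight_schedule(step, refine_start, refine_steps, initial=1.0, final=0.0):
    """Illustrative linear decay of the BC weight during a refinement phase."""
    if step < refine_start:
        return initial
    frac = min((step - refine_start) / refine_steps, 1.0)
    return initial + frac * (final - initial)
```

Under this sketch, the policy is first trained with a fixed BC weight and then refined by continuing offline updates while the weight is annealed towards zero, which is the "tipping the balance towards RL" idea the abstract describes.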