A compelling use case of offline reinforcement learning (RL) is to obtain a policy initialization from existing datasets, which allows efficient fine-tuning with a limited amount of active online interaction. However, several existing offline RL methods tend to exhibit poor online fine-tuning performance. On the other hand, online RL methods can learn effectively through online interaction, but struggle to incorporate offline data, which can make them very slow in settings where exploration is challenging or pre-training is necessary. In this paper, we devise an approach for learning an effective initialization from offline data that also enables fast online fine-tuning. Our approach, calibrated Q-learning (Cal-QL), accomplishes this by learning a conservative value function initialization that underestimates the value of the policy learned from offline data, while also being calibrated, in the sense that the learned Q-values are at a reasonable scale. We refer to this property as calibration, and define it formally as providing a lower bound on the true value function of the learned policy and an upper bound on the value of some other (suboptimal) reference policy, which may simply be the behavior policy. We show that offline RL algorithms that learn such calibrated value functions lead to effective online fine-tuning, enabling us to reap the benefits of offline initialization during online fine-tuning. In practice, Cal-QL can be implemented on top of existing conservative methods for offline RL with a one-line code change. Empirically, Cal-QL outperforms state-of-the-art methods on 10/11 fine-tuning benchmark tasks that we study in this paper.
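To make the "one-line code change" concrete, the following is a minimal sketch (not the authors' implementation) of how the calibration described above could be applied to a CQL-style conservative regularizer in PyTorch. The function and tensor names (`cql_regularizer`, `mc_returns`, etc.) are hypothetical, and Monte-Carlo return estimates of the behavior policy stand in for the reference policy's value:

```python
import torch


def cql_regularizer(q_policy_actions, q_data_actions, mc_returns, calibrate=True):
    """CQL-style conservative penalty with an optional Cal-QL-style calibration.

    q_policy_actions: Q(s, a~pi) -- critic values at actions sampled from the learned policy
    q_data_actions:   Q(s, a~D)  -- critic values at dataset actions
    mc_returns:       Monte-Carlo return-to-go of the behavior (reference) policy

    All names and the overall structure are illustrative, not the authors' code.
    """
    if calibrate:
        # The "one-line change": do not push the Q-values at policy actions
        # below the reference policy's value, keeping them at a reasonable scale.
        q_policy_actions = torch.maximum(q_policy_actions, mc_returns)
    # Standard conservative penalty: push down Q at policy actions,
    # push up Q at dataset actions.
    return q_policy_actions.mean() - q_data_actions.mean()
```

Under this reading, the penalty still underestimates the learned policy's value (lower bound) while the `maximum` keeps the estimate no lower than the reference policy's value (upper bound on that value), which is the calibration property described above.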