We consider a hybrid reinforcement learning setting (Hybrid RL), in which an agent has access to an offline dataset and the ability to collect experience via real-world online interaction. The framework mitigates the challenges that arise in both pure offline and online RL settings, allowing for the design of simple and highly effective algorithms, in both theory and practice. We demonstrate these advantages by adapting the classical Q-learning/iteration algorithm to the hybrid setting, which we call Hybrid Q-Learning or Hy-Q. In our theoretical results, we prove that the algorithm is both computationally and statistically efficient whenever the offline dataset supports a high-quality policy and the environment has bounded bilinear rank. Notably, we require no assumptions on the coverage provided by the initial distribution, in contrast with guarantees for policy gradient/iteration methods. In our experimental results, we show that Hy-Q with neural network function approximation outperforms state-of-the-art online, offline, and hybrid RL baselines on challenging benchmarks, including Montezuma's Revenge.
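The abstract describes the algorithm only at a high level. The sketch below illustrates the hybrid idea as stated: each iteration rolls out the current greedy policy to gather online data, pools those transitions with the fixed offline dataset, and performs a fitted Q-iteration (least-squares) update on the union. This is a minimal sketch under assumed interfaces, not the authors' implementation: the environment methods (env.reset, env.step returning (s_next, r, done)), the feature map featurize, and the linear Q-model are hypothetical placeholders standing in for the paper's function approximator.

```python
# Minimal sketch of hybrid fitted Q-iteration (the Hy-Q idea), assuming:
#   - offline_data: list of (s, a, r, s_next, done) transitions
#   - env.reset() -> s, env.step(a) -> (s_next, r, done)   [hypothetical interface]
#   - featurize(s, a) -> np.ndarray of length `dim`         [hypothetical feature map]
import numpy as np

def greedy_action(theta, s, actions, featurize):
    """Pick the action with the largest estimated Q-value under a linear model."""
    return max(actions, key=lambda a: featurize(s, a) @ theta)

def hybrid_fqi(env, offline_data, actions, featurize, dim,
               num_iters=100, rollouts_per_iter=10, horizon=200, gamma=0.99):
    theta = np.zeros(dim)      # linear Q-function weights
    online_data = []           # buffer for self-collected transitions

    for _ in range(num_iters):
        # (1) Online phase: roll out the current greedy policy, store transitions.
        for _ in range(rollouts_per_iter):
            s = env.reset()
            for _ in range(horizon):
                a = greedy_action(theta, s, actions, featurize)
                s_next, r, done = env.step(a)
                online_data.append((s, a, r, s_next, done))
                if done:
                    break
                s = s_next

        # (2) Hybrid phase: one fitted Q-iteration step, i.e. least-squares
        #     regression onto Bellman targets over offline + online data.
        X, y = [], []
        for (s, a, r, s_next, done) in offline_data + online_data:
            target = r
            if not done:
                target += gamma * max(featurize(s_next, b) @ theta for b in actions)
            X.append(featurize(s, a))
            y.append(target)
        theta, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)

    return theta
```

In this reading, the offline dataset supplies the coverage needed for stable regression targets while the online rollouts supply on-policy data, which is the mitigation of the pure offline/online difficulties that the abstract refers to.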