Offline reinforcement learning (RL) defines the task of learning from a static logged dataset without further interaction with the environment. The distribution shift between the learned policy and the behavior policy makes it necessary for the value function to stay conservative so that out-of-distribution (OOD) actions are not severely overestimated. However, existing approaches, which penalize unseen actions or regularize toward the behavior policy, are too pessimistic: they suppress the generalization of the value function and hinder performance improvement. This paper explores conservatism that is mild yet sufficient for offline learning without harming generalization. We propose Mildly Conservative Q-learning (MCQ), in which OOD actions are actively trained by assigning them proper pseudo Q values. We theoretically show that MCQ induces a policy that behaves at least as well as the behavior policy and that no erroneous overestimation occurs for OOD actions. Experimental results on the D4RL benchmarks demonstrate that MCQ achieves remarkable performance compared with prior work. Furthermore, MCQ shows superior generalization when transferred from offline to online training, and significantly outperforms the baselines.
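To make the "pseudo Q value" idea from the abstract concrete, the following is a minimal sketch of one way OOD actions could be actively trained toward a target derived from in-distribution actions, rather than being penalized. The function and argument names (`critic`, `behavior_model`, `policy`, `num_samples`) and the specific target construction (the maximum Q value over actions sampled from a learned behavior model) are illustrative assumptions, not the paper's exact recipe.

```python
import torch


def ood_pseudo_target_loss(critic, behavior_model, policy, states, num_samples=10):
    """Sketch: regress Q values of potentially OOD actions (from the current
    policy) toward a pseudo target built from Q values of in-distribution
    actions (sampled from a learned behavior model, e.g. a CVAE)."""
    batch_size = states.shape[0]

    with torch.no_grad():
        # Sample several candidate in-distribution actions per state.
        repeated_states = states.repeat_interleave(num_samples, dim=0)
        in_dist_actions = behavior_model.sample(repeated_states)
        q_in_dist = critic(repeated_states, in_dist_actions).view(batch_size, num_samples)
        # Pseudo target: best Q value among the sampled in-distribution actions.
        pseudo_target = q_in_dist.max(dim=1, keepdim=True).values

    # Actions from the current policy are treated as potentially OOD and are
    # actively trained toward the pseudo target, keeping the critic mildly
    # conservative without blanket penalization of unseen actions.
    ood_actions = policy(states)
    q_ood = critic(states, ood_actions.detach())
    return ((q_ood - pseudo_target) ** 2).mean()
```

In practice, a term like this would be added to the standard Bellman error of the critic with a weighting coefficient, so that in-distribution transitions are still fitted to their usual bootstrapped targets.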