Off-policy reinforcement learning holds the promise of sample-efficient learning of decision-making policies by leveraging past experience. However, in the offline RL setting -- where a fixed collection of interactions is provided and no further interactions are allowed -- it has been shown that standard off-policy RL methods can significantly underperform. Recently proposed methods often aim to address this shortcoming by constraining learned policies to remain close to the given dataset of interactions. In this work, we closely investigate an important simplification of BCQ -- a prior approach for offline RL -- which removes a heuristic design choice and naturally restricts extracted policies to remain exactly within the support of a given behavior policy. Importantly, in contrast to BCQ's original theoretical considerations, we derive this simplified algorithm through the introduction of a novel backup operator, Expected-Max Q-Learning (EMaQ), which is more closely related to the resulting practical algorithm. Specifically, in addition to the distribution's support, EMaQ explicitly considers the number of samples and the proposal distribution, allowing us to derive new sub-optimality bounds that can serve as a novel measure of complexity for offline RL problems. In the offline RL setting -- the main focus of this work -- EMaQ matches or outperforms prior state-of-the-art methods on the D4RL benchmarks. In the online RL setting, we demonstrate that EMaQ is competitive with Soft Actor-Critic (SAC). The key contributions of our empirical findings are demonstrating the importance of careful generative model design for estimating behavior policies, and providing an intuitive notion of complexity for offline RL problems. With its simple interpretation and fewer moving parts, such as no explicit function approximator representing the policy, EMaQ serves as a strong yet easy-to-implement baseline for future work.
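To make the backup operator concrete, the displayed equation below is a minimal sketch of an N-sample expected-max backup consistent with the description above; the notation ($\mu$ for the proposal/behavior distribution, $N$ for the number of sampled actions, $P$ for the transition dynamics) is introduced here for illustration and is not taken verbatim from this section:

$$
% Sketch of an N-sample expected-max backup; \mu, N, and P are notation assumed for illustration.
\mathcal{T}^{N}_{\mu} Q(s, a) \;=\; r(s, a) \;+\; \gamma \, \mathbb{E}_{s' \sim P(\cdot \mid s, a)} \, \mathbb{E}_{\{a'_i\}_{i=1}^{N} \sim \mu(\cdot \mid s')} \Big[ \max_{i = 1, \dots, N} Q(s', a'_i) \Big]
$$

Under this sketch, $N = 1$ reduces to policy evaluation of $\mu$, while as $N \to \infty$ the backup approaches the Bellman optimality backup restricted to the support of $\mu$, which illustrates how the number of samples and the proposal distribution can enter the sub-optimality bounds mentioned above.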