We study offline reinforcement learning (RL), which aims to learn an optimal policy based on a dataset collected a priori. Due to the lack of further interactions with the environment, offline RL suffers from insufficient coverage of the dataset, which eludes most existing theoretical analyses. In this paper, we propose a pessimistic variant of the value iteration algorithm (PEVI), which incorporates an uncertainty quantifier as the penalty function. Such a penalty function simply flips the sign of the bonus function used to promote exploration in online RL, which makes it easily implementable and compatible with general function approximators. Without assuming sufficient coverage of the dataset, we establish a data-dependent upper bound on the suboptimality of PEVI for general Markov decision processes (MDPs). When specialized to linear MDPs, it matches the information-theoretic lower bound up to multiplicative factors of the dimension and horizon. In other words, pessimism is not only provably efficient but also minimax optimal. In particular, given the dataset, the learned policy serves as the "best effort" among all policies, as no other policy can do better. Our theoretical analysis identifies the critical role of pessimism in eliminating a notion of spurious correlation, which emerges from the "irrelevant" trajectories that are less covered by the dataset and not informative for the optimal policy.
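To make the idea concrete, below is a minimal sketch of pessimistic value iteration for the linear MDP setting, where the penalty is the usual elliptical-bonus term with its sign flipped (i.e., subtracted from the estimated Q-values). The function and variable names, the clipping convention, and the synthetic toy data are our own illustrative assumptions, not the paper's implementation or experiments.

```python
# Sketch of pessimistic value iteration (PEVI) for linear MDPs (assumed setup:
# finite action set, known feature map phi, dataset grouped by step h).
import numpy as np

def pevi_linear(dataset, phi, actions, H, dim, beta, lam=1.0):
    """dataset[h]: list of (s, a, r, s_next) transitions collected at step h (0-indexed)."""
    w = [np.zeros(dim) for _ in range(H)]      # linear weights of the estimated Q-values
    Lambda_inv = [None] * H                    # inverse regularized covariance per step
    V_next = lambda s: 0.0                     # value beyond the last step is zero

    for h in reversed(range(H)):
        trans = dataset[h]
        Phi = np.stack([phi(s, a) for s, a, _, _ in trans])           # (n, dim)
        y = np.array([r + V_next(s_nxt) for _, _, r, s_nxt in trans])  # regression targets
        Lambda = Phi.T @ Phi + lam * np.eye(dim)
        Lambda_inv[h] = np.linalg.inv(Lambda)
        w[h] = Lambda_inv[h] @ Phi.T @ y                               # ridge regression

        def Q_hat(s, a, h=h):
            f = phi(s, a)
            penalty = beta * np.sqrt(f @ Lambda_inv[h] @ f)            # uncertainty quantifier
            return np.clip(f @ w[h] - penalty, 0.0, H - h)             # pessimism: subtract it

        V_next = lambda s, Q=Q_hat: max(Q(s, a) for a in actions)      # greedy value estimate

    def policy(s, h):
        # greedy action with respect to the pessimistic Q-value at step h
        score = lambda a: (phi(s, a) @ w[h]
                           - beta * np.sqrt(phi(s, a) @ Lambda_inv[h] @ phi(s, a)))
        return max(actions, key=score)
    return policy

# Toy usage with random features and synthetic transitions (illustrative only).
rng = np.random.default_rng(0)
dim, H, actions = 4, 3, [0, 1]
phi = lambda s, a: np.concatenate([s * (a == 0), s * (a == 1)])
dataset = [[(rng.normal(size=2), rng.integers(2), rng.random(), rng.normal(size=2))
            for _ in range(50)] for _ in range(H)]
pi = pevi_linear(dataset, phi, actions, H, dim, beta=1.0)
print(pi(rng.normal(size=2), h=0))
```

The only change relative to an optimistic (online) value iteration is the sign of the bonus term: subtracting it keeps the estimated values below the truth on poorly covered state-action pairs, which is what removes the spurious correlation discussed above.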