We study model-based offline Reinforcement Learning with general function approximation. We present an algorithm named Constrained Pessimistic Policy Optimization (CPPO) which leverages a general function class and uses a constraint to encode pessimism. Under the assumption that the ground truth model belongs to our function class, CPPO can learn when the offline data only provides partial coverage, i.e., it can learn a policy that competes against any policy covered by the offline data, with polynomial sample complexity with respect to the statistical complexity of the function class. We then demonstrate that this algorithmic framework can be applied to many specialized Markov Decision Processes where additional structural assumptions further refine the concept of partial coverage. One notable example is low-rank MDPs with representation learning, where partial coverage is defined using the relative condition number measured by the underlying unknown ground truth feature representation. Finally, we introduce and study the Bayesian setting in offline RL. The key benefit of Bayesian offline RL is that, algorithmically, we do not need to explicitly construct pessimism or a reward penalty, which could be hard beyond models with linear structure. We present a posterior sampling-based incremental policy optimization algorithm (PS-PO) which proceeds by iteratively sampling a model from the posterior distribution and performing one-step incremental policy optimization inside the sampled model. Theoretically, in expectation with respect to the prior distribution, PS-PO can learn a near-optimal policy under partial coverage with polynomial sample complexity.
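As a rough illustration of the PS-PO loop described above, the sketch below instantiates it in a toy tabular finite-horizon MDP. Everything concrete here is an assumption made for illustration, not the paper's construction: the Dirichlet posterior over transition kernels, the known reward, the exponentiated-gradient (NPG-style) incremental policy update, and the step size `eta` are all hypothetical choices that merely mirror the "sample a model, then take one incremental policy step inside it" structure.

```python
# Minimal PS-PO-style sketch (assumed tabular setting; not the paper's algorithm).
import numpy as np

S, A, H = 5, 3, 4                      # toy numbers of states, actions, horizon
rng = np.random.default_rng(0)
reward = rng.uniform(size=(S, A))      # assume a known reward for simplicity

# Offline transition counts n[s, a, s'] would come from the dataset; random here.
counts = rng.integers(0, 10, size=(S, A, S))

def sample_model_from_posterior():
    """Sample a transition kernel P(s'|s,a) from a Dirichlet posterior
    with a uniform prior (an assumed, conjugate choice)."""
    P = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            P[s, a] = rng.dirichlet(counts[s, a] + 1.0)
    return P

def q_values(P, policy):
    """Evaluate the current policy inside the sampled model P by backward induction."""
    Q = np.zeros((H, S, A))
    V_next = np.zeros(S)
    for h in reversed(range(H)):
        Q[h] = reward + P @ V_next                     # Bellman backup, shape (S, A)
        V_next = np.einsum('sa,sa->s', policy[h], Q[h])
    return Q

policy = np.full((H, S, A), 1.0 / A)   # uniform initial policy pi[h, s, a]
eta = 0.1                              # incremental step size (assumed)

for t in range(50):
    P_t = sample_model_from_posterior()                # 1) posterior sampling
    Q_t = q_values(P_t, policy)                        # 2) evaluate policy in sampled model
    # 3) one-step incremental (exponentiated-gradient) policy update
    policy = policy * np.exp(eta * Q_t)
    policy /= policy.sum(axis=-1, keepdims=True)
```

The point of the sketch is the structure of the loop: no explicit pessimism bonus or reward penalty is constructed; randomness from the posterior plus a small incremental policy step per iteration is the entire mechanism.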