We consider the offline reinforcement learning problem, where the aim is to learn a decision-making policy from logged data. Offline RL -- particularly when coupled with (value) function approximation to allow for generalization in large or continuous state spaces -- is becoming increasingly relevant in practice, because it avoids costly and time-consuming online data collection and is well suited to safety-critical domains. Existing sample complexity guarantees for offline value function approximation methods typically require both (1) distributional assumptions (i.e., good coverage) and (2) representational assumptions (i.e., ability to represent some or all $Q$-value functions) stronger than what is required for supervised learning. However, the necessity of these conditions and the fundamental limits of offline RL are not well understood in spite of decades of research. This led Chen and Jiang (2019) to conjecture that concentrability (the most standard notion of coverage) and realizability (the weakest representation condition) alone are not sufficient for sample-efficient offline RL. We resolve this conjecture in the positive by proving that in general, even if both concentrability and realizability are satisfied, any algorithm requires sample complexity polynomial in the size of the state space to learn a non-trivial policy. Our results show that sample-efficient offline reinforcement learning requires either restrictive coverage conditions or representation conditions that go beyond supervised learning, and highlight a phenomenon called over-coverage, which serves as a fundamental barrier for offline value function approximation methods. A consequence of our results for reinforcement learning with linear function approximation is that the separation between online and offline RL can be arbitrarily large, even in constant dimension.
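For concreteness, the two conditions in question can be stated in their standard forms as follows; the notation here ($\mu$ for the offline data distribution, $d^{\pi}$ for the occupancy measure of policy $\pi$, and $\mathcal{F}$ for the value function class) is illustrative and not fixed by the abstract itself:

\[
\text{(Concentrability)}\qquad
C \;:=\; \sup_{\pi}\,\sup_{(s,a)}\,\frac{d^{\pi}(s,a)}{\mu(s,a)} \;<\; \infty,
\]
\[
\text{(Realizability)}\qquad
Q^{\star} \in \mathcal{F}.
\]

Concentrability asks that the logged data cover the state-action distribution of every policy up to a bounded ratio, while realizability asks only that the function class contain the optimal $Q$-value function -- the same representational demand one would make in supervised learning.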