A foundational problem in reinforcement learning and interactive decision making is to understand what modeling assumptions lead to sample-efficient learning guarantees, and what algorithm design principles achieve optimal sample complexity. Recently, Foster et al. (2021) introduced the Decision-Estimation Coefficient (DEC), a measure of statistical complexity which leads to upper and lower bounds on the optimal sample complexity for a general class of problems encompassing bandits and reinforcement learning with function approximation. In this paper, we introduce a new variant of the DEC, the Constrained Decision-Estimation Coefficient, and use it to derive new lower bounds that improve upon prior work on three fronts:

- They hold in expectation, with no restrictions on the class of algorithms under consideration.
- They hold globally, and do not rely on the notion of localization used by Foster et al. (2021).
- Most interestingly, they allow the reference model with respect to which the DEC is defined to be improper, establishing that improper reference models play a fundamental role.

We provide upper bounds on regret that scale with the same quantity, thereby closing all but one of the gaps between upper and lower bounds in Foster et al. (2021). Our results apply to both the regret framework and PAC framework, and make use of several new analysis and algorithm design techniques that we anticipate will find broader use.
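For context, a sketch of the (unconstrained) DEC of Foster et al. (2021) as it is usually stated, where $\Pi$ is the decision space, $f^{M}$ is the mean reward under model $M$, $\pi_M$ is the optimal decision for $M$, $\bar{M}$ is the reference model, and $D_{\mathrm{H}}^{2}$ is the squared Hellinger distance (notation here is assumed, not taken from this abstract):

```latex
\operatorname{dec}_{\gamma}(\mathcal{M}, \bar{M})
  = \inf_{p \in \Delta(\Pi)} \, \sup_{M \in \mathcal{M}} \,
    \mathbb{E}_{\pi \sim p}\!\left[
      f^{M}(\pi_M) - f^{M}(\pi)
      - \gamma \, D_{\mathrm{H}}^{2}\!\bigl(M(\pi), \bar{M}(\pi)\bigr)
    \right]
```

The constrained variant introduced in this paper replaces the Hellinger penalty term with a hard constraint on estimation error; crucially, the lower bounds here allow $\bar{M} \notin \mathcal{M}$ (an improper reference model).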