This paper studies the statistical theory of offline reinforcement learning with deep ReLU networks. We consider the off-policy evaluation (OPE) problem, where the goal is to estimate the expected discounted reward of a target policy given logged data generated by unknown behaviour policies. We study a regression-based fitted Q evaluation (FQE) method using deep ReLU networks and characterize a finite-sample bound on the estimation error of this method under mild assumptions. Prior works on OPE with either general function approximation or deep ReLU networks ignore the data-dependent structure in the algorithm, thereby sidestepping the technical bottleneck of OPE, while also requiring rather restrictive regularity assumptions. In this work, we overcome these limitations and provide a comprehensive analysis of OPE with deep ReLU networks. In particular, we precisely quantify how the distribution shift of the offline data, the dimension of the input space, and the regularity of the system control the OPE estimation error. Consequently, we provide insights into the interplay between offline reinforcement learning and deep learning.
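To make the analyzed procedure concrete, the following is a minimal sketch of regression-based FQE with a deep ReLU network, written in PyTorch. It assumes a finite action space, a deterministic target policy, and logged transitions (s, a, r, s'); all names (QNet, fqe, estimate_value, target_policy) and hyperparameters are illustrative choices, not specifications from the paper.

```python
# Minimal FQE sketch (assumed setup: finite actions, deterministic target policy,
# logged transitions as tensors). Hyperparameters are arbitrary illustrations.
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Fully connected ReLU network mapping a state to one Q-value per action."""

    def __init__(self, state_dim, num_actions, width=256, depth=3):
        super().__init__()
        layers, in_dim = [], state_dim
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        layers.append(nn.Linear(in_dim, num_actions))
        self.net = nn.Sequential(*layers)

    def forward(self, s):
        return self.net(s)


def fqe(states, actions, rewards, next_states, target_policy,
        gamma=0.99, num_iterations=50, epochs_per_iter=20, lr=1e-3):
    """Iteratively regress Q_{k+1}(s, a) onto r + gamma * Q_k(s', pi(s'))."""
    state_dim, num_actions = states.shape[1], int(actions.max().item()) + 1
    q = QNet(state_dim, num_actions)

    for _ in range(num_iterations):
        # Regression targets are built from the previous iterate and held fixed.
        with torch.no_grad():
            next_actions = target_policy(next_states)  # pi(s'), shape (N,)
            next_q = q(next_states).gather(1, next_actions.unsqueeze(1)).squeeze(1)
            targets = rewards + gamma * next_q

        # Each iteration solves a fresh least-squares regression problem.
        q_new = QNet(state_dim, num_actions)
        opt = torch.optim.Adam(q_new.parameters(), lr=lr)
        for _ in range(epochs_per_iter):
            pred = q_new(states).gather(1, actions.unsqueeze(1)).squeeze(1)
            loss = ((pred - targets) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        q = q_new

    return q


def estimate_value(q, initial_states, target_policy):
    """Plug-in OPE estimate: average Q_K(s0, pi(s0)) over initial states."""
    with torch.no_grad():
        a0 = target_policy(initial_states)
        return q(initial_states).gather(1, a0.unsqueeze(1)).squeeze(1).mean().item()
```

The sketch mirrors the regression structure the paper analyzes: each FQE iteration is a supervised regression of Bellman targets under the target policy, and the final value estimate is obtained by averaging the learned Q-function over initial states.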