We study the problem of off-policy evaluation (OPE) for episodic Partially Observable Markov Decision Processes (POMDPs) with continuous states. Motivated by the recently proposed proximal causal inference framework, we develop a non-parametric identification result for estimating the policy value via a sequence of so-called V-bridge functions with the help of time-dependent proxy variables. We then develop a fitted-Q-evaluation-type algorithm that estimates the V-bridge functions recursively, solving a non-parametric instrumental variable (NPIV) problem at each step. By analyzing this challenging sequential NPIV estimation, we establish finite-sample error bounds for estimating the V-bridge functions and, consequently, for evaluating the policy value, in terms of the sample size, the length of the horizon, and a so-called (local) measure of ill-posedness at each step. To the best of our knowledge, this is the first finite-sample error bound for OPE in POMDPs under non-parametric models.
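As a rough illustration of the backward recursion (the notation here is schematic and assumed for exposition rather than the paper's exact construction: $W_t$ and $Z_t$ denote outcome-inducing and action-inducing proxy variables in the spirit of proximal causal inference, $R_t$ the reward at step $t$, and the evaluation policy's action weighting is suppressed for readability), each V-bridge function $b_t$ may be viewed as a solution of an NPIV-type conditional moment restriction,
\[
\mathbb{E}\bigl[\, R_t + b_{t+1}(W_{t+1}) - b_t(W_t) \,\bigm|\, Z_t, A_t \,\bigr] = 0,
\qquad t = T, T-1, \dots, 1, \quad b_{T+1} \equiv 0 \ \text{(schematic)},
\]
solved backward in a fitted-Q-evaluation fashion, with $(Z_t, A_t)$ playing the role of the instrument; the policy value is then obtained by averaging $b_1$ over the initial distribution of the proxies. The ill-posedness of each such inverse problem is what drives the (local) measure appearing in the error bounds.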