We consider off-policy evaluation of dynamic treatment rules under the assumption that the underlying system can be modeled as a partially observed Markov decision process (POMDP). We propose an estimator, partial history importance weighting, and show that it can consistently estimate the stationary mean rewards of a target policy given long enough draws from the behavior policy. Furthermore, we establish an upper bound on its error that decays polynomially in the number of observations (i.e., the number of trajectories times their length), with an exponent that depends on the overlap of the target and behavior policies, and on the mixing time of the underlying system. We also establish a polynomial minimax lower bound for off-policy evaluation under the POMDP assumption, and show that its exponent has the same qualitative dependence on overlap and mixing time as obtained in our upper bound. Together, our upper and lower bounds imply that off-policy evaluation in POMDPs is strictly harder than off-policy evaluation in (fully observed) Markov decision processes, but strictly easier than model-free off-policy evaluation.
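The abstract only names the estimator, so the following is a rough sketch rather than the paper's exact construction: a minimal Python illustration of partial history importance weighting, assuming a single long trajectory of (observation, action, reward) triples, memoryless target and behavior policies pi_target(a, o) and pi_behavior(a, o) that condition only on the current observation, and a user-chosen partial-history length k. All of these names and simplifications are assumptions made for illustration.

import numpy as np

def partial_history_importance_weighting(observations, actions, rewards,
                                         pi_target, pi_behavior, k):
    # Hedged sketch: reweight each reward by the product of
    # target-to-behavior action-probability ratios over only the
    # most recent k steps (the "partial history"), then average.
    T = len(rewards)
    weighted_rewards = []
    for t in range(k - 1, T):
        ratios = [pi_target(actions[s], observations[s]) /
                  pi_behavior(actions[s], observations[s])
                  for s in range(t - k + 1, t + 1)]
        weighted_rewards.append(np.prod(ratios) * rewards[t])
    return np.mean(weighted_rewards)

Roughly, truncating the importance weights to the last k actions rather than the full trajectory is what connects the estimator's error to the quantities in the abstract: the product of k density ratios is controlled by the overlap of the target and behavior policies, while the bias from ignoring older actions shrinks when the underlying system mixes quickly.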