We consider the batch (offline) policy learning problem in the infinite-horizon Markov Decision Process. Motivated by mobile health applications, we focus on learning a policy that maximizes the long-term average reward. We propose a doubly robust estimator of the average reward and show that it achieves semiparametric efficiency. Further, we develop an optimization algorithm to compute the optimal policy within a parameterized stochastic policy class. The performance of the estimated policy is measured by the difference between the optimal average reward attainable in the policy class and the average reward of the estimated policy, and we establish a finite-sample regret guarantee. The method is illustrated by simulation studies and by an analysis of a mobile health study promoting physical activity.
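For concreteness, the evaluation criterion can be sketched in standard notation (the symbols $\eta$, $\Pi$, $\hat{\pi}$, and $R_t$ are our shorthand, not fixed by the abstract): the long-term average reward of a policy $\pi$ and the regret of an estimated policy $\hat{\pi}$ may be written as
\[
\eta(\pi) \;=\; \lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{T} R_t\right],
\qquad
\mathrm{Regret}(\hat{\pi}) \;=\; \max_{\pi \in \Pi} \eta(\pi) \;-\; \eta(\hat{\pi}),
\]
where $R_t$ denotes the reward at decision time $t$ and $\Pi$ is the parameterized stochastic policy class; the finite-sample guarantee bounds this regret.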