We develop a regression-based primal-dual martingale approach for solving finite-time-horizon MDPs with general state and action spaces. As a result, our method allows for the construction of tight upper- and lower-biased approximations of the value functions and provides tight approximations to the optimal policy. In particular, we prove tight error bounds for the estimated duality gap, featuring polynomial dependence on the time horizon and sublinear dependence on the cardinality/dimension of the possibly infinite state and action spaces. From a computational point of view, the proposed method is efficient since, in contrast to the usual duality-based methods for optimal control problems in the literature, the Monte Carlo procedures involved do not require nested simulations.
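The primal-dual (martingale duality) idea behind such upper and lower biased bounds can be illustrated on a toy problem. The sketch below is a hedged, self-contained example and not the paper's regression-based construction: the transition kernel `P`, rewards `r`, and approximate value function `v_hat` are all made-up illustrative objects. A suboptimal policy simulated forward gives a lower-biased estimate, while a martingale penalty built from `v_hat` gives an upper-biased estimate via a pathwise (non-nested) maximization, so the gap between the two estimates bounds the suboptimality.

```python
import numpy as np

# Hypothetical toy finite-horizon MDP: nS states, nA actions, horizon T.
rng = np.random.default_rng(0)
nS, nA, T = 2, 2, 5
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state law
r = rng.uniform(0.0, 1.0, size=(nS, nA))        # one-step rewards

# Exact value functions by backward induction (ground truth for reference).
V = np.zeros((T + 1, nS))
for t in range(T - 1, -1, -1):
    V[t] = (r + P @ V[t + 1]).max(axis=1)

# A (deliberately perturbed) value approximation defines the dual martingale.
v_hat = V + 0.05 * rng.standard_normal(V.shape)

def sample_next(s, a, u):
    # Inverse-CDF coupling: one uniform u drives the transition from (s, a).
    return min(int(np.searchsorted(np.cumsum(P[s, a]), u)), nS - 1)

# Lower bound: simulate the greedy policy induced by v_hat.
# Upper bound: pathwise deterministic DP with martingale penalty
#   v_hat[t+1, s'] - E[v_hat[t+1, S'] | s, a],
# solved along each realized noise path -- no nested simulation needed.
n_paths, s0 = 2000, 0
low, up = 0.0, 0.0
for _ in range(n_paths):
    u = rng.uniform(size=T)
    # Forward pass: greedy policy reward (lower-biased in expectation).
    s, g = s0, 0.0
    for t in range(T):
        a = int(np.argmax(r[s] + P[s] @ v_hat[t + 1]))
        g += r[s, a]
        s = sample_next(s, a, u[t])
    low += g
    # Backward pass: penalized pathwise DP (upper-biased in expectation).
    W = np.zeros(nS)
    for t in range(T - 1, -1, -1):
        Wn = np.empty(nS)
        for s_ in range(nS):
            best = -np.inf
            for a in range(nA):
                s2 = sample_next(s_, a, u[t])
                pen = v_hat[t + 1, s2] - P[s_, a] @ v_hat[t + 1]
                best = max(best, r[s_, a] - pen + W[s2])
            Wn[s_] = best
        W = Wn
    up += W[s0]
low /= n_paths
up /= n_paths
print(f"lower {low:.3f} <= V0 {V[0, s0]:.3f} <= upper {up:.3f}")
```

With the exact value function as input (`v_hat = V`), the penalty removes all randomness from the pathwise problem and the duality gap vanishes; with an approximate `v_hat`, the gap quantifies the approximation quality, which is the quantity the error bounds above control.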