A common technique in reinforcement learning is to estimate the value function of a given policy from Monte Carlo simulations and then to use the estimated value function to obtain a new policy that is greedy with respect to it. A well-known, longstanding open problem in this context is to prove the convergence of such a scheme when the value function of a policy is estimated from data collected along a single sample path obtained by implementing the policy (see page 99 of [Sutton and Barto, 2018] and page 8 of [Tsitsiklis, 2002]). We present a solution to this open problem by showing that a first-visit version of such a policy iteration scheme indeed converges to the optimal policy, provided that the policy improvement step uses lookahead [Silver et al., 2016, Mnih et al., 2016, Silver et al., 2017b] rather than a simple greedy policy improvement. We provide results both for the original open problem in the tabular setting and for extensions to the function approximation setting, where we show that the policy resulting from the algorithm performs close to the optimal policy, to within a function approximation error.
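To make the lookahead improvement step concrete, the following is a minimal sketch in our own notation, not necessarily the paper's exact formulation; the MDP primitives $r$, $p$, $\gamma$, the lookahead depth $H$, and the estimate $\hat{V}^{\mu_k}$ are notational assumptions introduced here for illustration. Given the first-visit Monte Carlo estimate $\hat{V}^{\mu_k}$ of the current policy $\mu_k$, a lookahead of depth $H \ge 1$ chooses, at each state $s$,
$$\mu_{k+1}(s) \in \arg\max_{a \in \mathcal{A}} \Big\{ r(s,a) + \gamma \sum_{s'} p(s' \mid s, a)\,\big(T^{H-1} \hat{V}^{\mu_k}\big)(s') \Big\},$$
where $T$ denotes the Bellman optimality operator. Setting $H = 1$ recovers the simple greedy improvement; larger $H$ applies additional steps of the optimality operator to the Monte Carlo estimate before acting greedily.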