A simple and natural algorithm for reinforcement learning (RL) is Monte Carlo Exploring Starts (MCES), where the Q-function is estimated by averaging Monte Carlo returns and the policy is improved by choosing actions that maximize the current estimate of the Q-function. Exploration is performed by "exploring starts": each episode begins with a randomly chosen state and action and then follows the current policy to the terminal state. In the classic book on RL by Sutton & Barto (2018), it is stated that establishing convergence of the MCES algorithm is one of the most important remaining open theoretical problems in RL. The convergence question for MCES, however, turns out to be quite nuanced. Bertsekas & Tsitsiklis (1996) provide a counter-example showing that the MCES algorithm does not necessarily converge. Tsitsiklis (2002) further shows that if the original MCES algorithm is modified so that the Q-function estimates are updated at the same rate for all state-action pairs, and the discount factor is strictly less than one, then the MCES algorithm converges. In this paper we make headway with the original and more efficient MCES algorithm given in Sutton & Barto (1998), establishing almost sure convergence for Optimal Policy Feed-Forward MDPs, which are MDPs whose states are not revisited within any episode when an optimal policy is followed. This class includes all deterministic environments as well as all episodic environments whose state incorporates a timestep or any other monotonically changing value. Unlike previous proofs, which rely on stochastic approximation, we introduce a novel inductive approach that is very simple and uses only the strong law of large numbers.
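As a concrete illustration of the algorithm described above, the following is a minimal sketch of MCES with first-visit Monte Carlo averaging of returns and greedy policy improvement, run on a toy deterministic chain MDP. The environment, discount factor, episode count, and step cap are illustrative assumptions and are not taken from the paper.

```python
# Minimal MCES sketch: exploring starts, first-visit Monte Carlo Q estimates,
# and greedy policy improvement. Toy deterministic chain MDP (assumed here).
import random
from collections import defaultdict

N_STATES = 4          # states 0..3; state 3 is terminal
ACTIONS = [0, 1]      # 0 = stay (reward 0), 1 = move right (reward 1)
GAMMA = 0.9           # discount factor (illustrative)

def step(state, action):
    """Toy deterministic dynamics: action 1 advances toward the terminal state."""
    if action == 1:
        return state + 1, 1.0
    return state, 0.0

returns_sum = defaultdict(float)   # cumulative return per (s, a)
returns_cnt = defaultdict(int)     # visit count per (s, a)
Q = defaultdict(float)             # running average of returns
policy = {s: random.choice(ACTIONS) for s in range(N_STATES)}

for episode in range(5000):
    # Exploring start: random non-terminal state and random first action.
    s, a = random.randrange(N_STATES - 1), random.choice(ACTIONS)
    trajectory = []
    t = 0
    # Cap length as a safeguard while the current policy may not yet terminate.
    while s != N_STATES - 1 and t < 100:
        s_next, r = step(s, a)
        trajectory.append((s, a, r))
        s = s_next
        a = policy.get(s, 0)
        t += 1

    # First-visit Monte Carlo update of Q, then greedy policy improvement.
    G = 0.0
    visited = set()
    for (s_t, a_t, r_t) in reversed(trajectory):
        G = GAMMA * G + r_t
        if (s_t, a_t) not in visited:
            visited.add((s_t, a_t))
            returns_sum[(s_t, a_t)] += G
            returns_cnt[(s_t, a_t)] += 1
            Q[(s_t, a_t)] = returns_sum[(s_t, a_t)] / returns_cnt[(s_t, a_t)]
            policy[s_t] = max(ACTIONS, key=lambda act: Q[(s_t, act)])

print({s: policy[s] for s in range(N_STATES - 1)})  # expected: action 1 at every state
```

Note that this chain MDP is feed-forward under the optimal policy (no state is revisited once the agent keeps moving right), so it falls within the class of Optimal Policy Feed-Forward MDPs covered by the convergence result.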