We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model. The objective is to devise a learning algorithm that returns the best policy as early as possible. We first derive a problem-specific lower bound on the sample complexity satisfied by any learning algorithm. This lower bound corresponds to an optimal sample allocation that solves a non-convex program, and hence is hard to exploit in the design of efficient algorithms. We then provide a simple and tight upper bound on the sample complexity lower bound, whose corresponding nearly-optimal sample allocation becomes explicit. The upper bound depends on specific functionals of the MDP, such as the sub-optimality gaps and the variance of the next-state value function, and thus genuinely captures the hardness of the MDP. Finally, we devise KLB-TS (KL Ball Track-and-Stop), an algorithm that tracks this nearly-optimal allocation, and provide asymptotic guarantees for its sample complexity (both almost surely and in expectation). The advantages of KLB-TS over state-of-the-art algorithms are discussed and illustrated numerically.
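For context, sample-complexity lower bounds of this kind are typically stated in the following generic form from the track-and-stop literature; the notation below is assumed for illustration and is not taken verbatim from the paper. For any δ-PAC algorithm with stopping time τ_δ,

\[
\mathbb{E}_M[\tau_\delta] \;\ge\; T^\star(M)\,\mathrm{kl}(\delta,\,1-\delta),
\qquad
\frac{1}{T^\star(M)} \;=\; \sup_{\omega\in\Sigma}\ \inf_{M'\in\mathrm{Alt}(M)}\ \sum_{s,a}\omega(s,a)\,\mathrm{KL}_{M\mid M'}(s,a),
\]

where \(\Sigma\) denotes the simplex over state-action pairs, \(\mathrm{Alt}(M)\) the set of MDPs whose optimal policy differs from that of \(M\), and \(\mathrm{kl}\) the binary relative entropy. The lower bound discussed in the abstract corresponds to an optimization of this type over allocations \(\omega\).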
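To make the "tracking" idea concrete, the following is a minimal, generic track-and-stop loop in Python. It is an illustrative sketch only: the callables sample, target_allocation, glr_statistic, and threshold are assumed interfaces, and the snippet does not implement the paper's KLB-TS allocation or its KL-ball stopping rule.

```python
import numpy as np

def track_and_stop(sample, num_arms, target_allocation, glr_statistic,
                   threshold, delta=0.05, max_samples=100_000):
    """Generic track-and-stop loop (illustrative sketch, not the paper's KLB-TS).

    sample(arm)             -> one observation from the generative model for `arm`
                               (an "arm" here stands for a state-action pair).
    target_allocation(data) -> desired sampling proportions (a simplex vector),
                               e.g. a nearly-optimal allocation estimated from data.
    glr_statistic(data)     -> generalized likelihood-ratio stopping statistic.
    threshold(t, delta)     -> stopping threshold at time t for confidence delta.
    """
    counts = np.zeros(num_arms)
    data = [[] for _ in range(num_arms)]

    for t in range(1, max_samples + 1):
        omega = target_allocation(data)                # current allocation estimate
        # D-tracking: force exploration of under-sampled arms, otherwise
        # sample the arm whose empirical count lags its target the most.
        under = np.where(counts < np.sqrt(t) - num_arms / 2)[0]
        arm = int(under[0]) if len(under) else int(np.argmax(t * omega - counts))
        data[arm].append(sample(arm))
        counts[arm] += 1

        if glr_statistic(data) > threshold(t, delta):  # Chernoff-type stopping rule
            break
    return data, counts
```

A usage note: in this style of algorithm, the sampling rule keeps the empirical allocation close to the (estimated) optimal one, while the stopping rule certifies, with probability at least 1 - delta, that the recommended policy is optimal.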